WA3368 Lab Guide

CI/CD Using GitHub and Microservices Development in Python

Creating a GitHub account and Personal Access Token (PAT)

Module 1

GitHub is a web-based platform for version control, collaboration, and project management, facilitating efficient and organized software development workflows.

In this lab you will create a new personal GitHub account (or, use an existing one), and create a Personal Access Token (PAT) for use during Labs.

A Personal Access Token is a secure authentication method used in GitHub to grant access to your account’s resources and perform actions on your behalf, such as pushing code, creating repositories, or managing issues.
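As a sketch of where a PAT fits in, the snippet below assembles the HTTPS credential form Git uses when authenticating to GitHub. The username and token here are made-up placeholders, not real credentials.

```shell
# Sketch only: a PAT takes the place of your account password whenever
# Git talks to GitHub over HTTPS. These values are placeholders.
GH_USER="octocat"
GH_TOKEN="ghp_ExampleToken1234"   # never commit or share a real token
AUTH_URL="https://${GH_USER}:${GH_TOKEN}@github.com"
echo "$AUTH_URL"
# → https://octocat:ghp_ExampleToken1234@github.com
```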

1.1. Decide which GitHub account to use

  • If you already have a personal GitHub account, you are free to use it.
  • If you choose to use your existing personal GitHub account, skip to Part 3; otherwise, follow Part 2 to sign up for a new GitHub account.

1.2. Sign up for a GitHub account

In this Part, you will register a new GitHub account.

Note

If you have an existing personal GitHub account, you can use it and skip to Part 3.

  1. Open a New Chrome Web Browser Tab/Window using either the main Ubuntu Applications menu located at the top of your desktop, or the Ubuntu Desktop Launcher located at the bottom of your desktop as seen in the two images below:

    Google Menu item
    Google shortcut

    Important

    If you are prompted with an “Authentication required” pop-up window, use “wasadmin” as the password to unlock.

  2. Navigate to “https://github.com/”

    Click on "Sign up" and follow the instructions to create a new account.

    Important

    Do not use your corporate email! Use a personal email.

  3. Once you have completed the account registration, verified your email, and tested logging into your new personal GitHub account, proceed to Part 3.

1.3. Create a Personal Access Token (PAT)

In upcoming labs, we will generate repositories for lab exercises. Before then, it is essential to create a Personal Access Token (PAT), which functions as a password for accessing GitHub through the command line. This token enables us to use Git commands that interact with remote repositories on GitHub.

Create a personal access token (PAT):

  1. Log into your Personal GitHub Account.

  2. Access GitHub account settings by clicking on your profile picture located in the top-right corner and selecting "Settings".

  3. In the left sidebar-menu, locate and click on "Developer settings" (typically located at the bottom of the menu).

  4. Select "Personal access tokens".

  5. Select “Tokens (classic)”.

  6. Click on "Generate new token (classic)" button.

    Note

    If prompted for a second choice, then choose “Generate a new token (classic)”. The URL should now be “https://github.com/settings/tokens/new”

  7. Assign your token a description in the “Note” field, for example “wa3368 PAT”.

  8. For the “Select scopes” field, select "repo" for full control of private repositories, and also select “workflow” to allow updating of GitHub Actions workflows as seen below:

    Github Action workflows
  9. Click on "Generate token", usually located at the bottom of the screen.

  10. Immediately copy the token by clicking on the copy-icon (we will save this token in a moment).

    Copy the token

    Important

    Make sure you copy the generated token immediately as it will not be shown again.

    If you make a mistake, then you can delete the token, and create another one. We will be using the token later in this lab, and during future labs.

    We will now create a new text file and save it on the Desktop, to store our token for re-use when needed. We will do this using Visual Studio Code (VSCode) as outlined in the next few steps.

  11. Open a New Terminal using the “Terminal” icon in the main Ubuntu Desktop Launcher as shown below.

    Open a New Terminal
  12. Create a new file in VSCode by entering the following command.

    code ~/Desktop/my_tokens.txt

    Important

    If you are prompted with an “Authentication required” pop-up window, use “wasadmin” as the password to unlock.

  13. Enter a new line and a label/comment to identify the token, such as:

    github:
  14. Press enter to append a new line after the label.

  15. Using the right-mouse context-menu paste option, “Paste” your token into the new file as in the example image below:

    Paste your token
    Paste your token

    Note

    If you are using HTTPS to remotely connect to the lab, the behavior of your Remote Desktop session can sometimes be affected by the Operating System (OS) you are using on your actual physical machine. You may have to look at alternative copy and paste methods specific to your OS.

    Remember, each OS (Windows, macOS, etc.) has different supported copy/paste shortcut commands.

    The result will be similar to the image below:

    copy paste shortcut
  16. Save the file using the VSCode File > Save menu-option, then use File > Exit to close VSCode.

  17. Well done! The lab is complete.

1.4. Clean Up

  1. Close all open Terminal sessions.

  2. Close all open Web Browser windows.

  3. Close all open VSCode files & windows.

    We are now ready to use GitHub for working with Repositories and GitHub Actions.

1.5. Summary

Core topics covered:

  • Creating a GitHub account
  • Generating a Personal Access Token (PAT)
    • Accessed GitHub account settings, navigated to "Developer settings," selected "Personal access tokens," and generated a new token with appropriate scopes.

Using the GitHub Workflow

Module 2

GitHub flow is a lightweight, branch-based workflow.

In this lab you will explore the GitHub Flow workflow and see how to create Feature Branches and use Pull Requests.

Note

Many of the fundamentals covered here are vital to future lab success.

Note

This lab depends upon a previously stored Personal Access Token (PAT) from “my_tokens.txt” which should have been saved on your Lab machine’s desktop from a previous lab.

2.1. Create a new Git repository in GitHub

In this part we will create a new Git repository (repo) in GitHub. Later in the lab, you will commit changes to a local clone of this repo and push them from your local workstation back to GitHub.

  1. Open a New Chrome Web Browser Tab/Window using either the main Ubuntu Applications menu located at the top of your desktop, or the Ubuntu Desktop Launcher located at the bottom of your desktop as seen in the two images below:

    Open Google
    Google shortcut
  2. Sign in to your Personal GitHub Account.

    • https://github.com/

      Important

      If you do not have an existing personal GitHub, then one must be created. The steps to create an account and a Personal Access Token (PAT) are covered in a previous lab.

  3. In GitHub, click on your Profile icon on the top-right hand corner of the GitHub user-interface and select “Your repositories” from the list of menu options.

  4. To create a new repo, you can click the “New” button or look for the “+” icon in the upper-right side of the GitHub user-interface. Either of these options will present the new repo wizard.

    You can see these options highlighted in the image below:

    create a new repo
  5. Give your repository a name. Enter “github-lab1” in the “Repository name” field.

    github-lab1
  6. Write a brief description of your repository in the “Description” field for example “GitHub Lab 1”.

    GitHub Lab 1 testing
  7. Set your repository to be Private (only you and people you invite can see it).

  8. Locate the “Initialize this repository with README” section and check the “Add a README file” option, which will automatically create a README file in your repository.

  9. Check your settings match those of the image below:

    1
  10. Click “Create repository” to finish.

  11. You will then be taken to the github-lab1 repository home-page as shown below:

    Create repository

Congrats, you have created the repo in GitHub.

Don’t close this browser window, we will return to it shortly.

2.2. Clone the repo locally

The difference between a clone and a fork is that a clone creates a local copy of the repository, while a fork creates a copy of the repository on your GitHub account.

In this section, we use cloning to demonstrate the GitHub Flow workflow, but in a real-life scenario, using a fork is a common practice for contributing to open-source projects or collaborating with others.

Note

You can only fork a repo when you are not the owner, or when your account is part of an organization.

You will clone the remote GitHub repo locally on your Lab machine using the following steps:

  1. Use the Existing Terminal session you opened earlier, or create a New Terminal session if you have closed it.

  2. Create a new working folder called “Works” in your user’s home directory, using the following commands:

    cd /home/wasadmin/
    mkdir Works
    cd /home/wasadmin/Works
  3. Check you are in the “/home/wasadmin/Works” folder using the “print working directory (pwd)” command:

    pwd

    Result:

    /home/wasadmin/Works
  4. Return to the existing GitHub interface browser window - “github-lab1” repo’s home page.

  5. Click the button labeled “<> Code” and from the dialog box copy the “HTTPS” URL. It will be in the format of “https://github.com/<github-username>/github-lab1” as seen below:

    copy the HTTPS URL

    Now you can issue a “git clone <URL>” command in your Terminal session using the repo HTTPS URL that you copied above.

  6. Use the right-mouse context-menu “Paste” option to paste the URL into the terminal after the command “git clone”, to complete the full command below:

    git clone <pasted repo URL>

    We can see the use of right-mouse paste context menu below:

    right-mouse paste context

    The completed full-command syntax including the repo (HTTPS) URL will be as follows:

    git clone https://github.com/<your_github_username>/github-lab1.git

    Note

    Don’t forget the space (“ “) after “git clone”.

    If you have issues with right-mouse-paste, then you can type out the full command, making sure that you use the correct GitHub username in the URL.

  7. When prompted for your username, use your GitHub username, not your GitHub email.

    For the password we will copy your previously stored Personal Access Token (PAT) from “my_tokens.txt” which should have been saved on your Lab machine’s desktop from a previous lab.

  8. Open “my_tokens.txt” by double-clicking on it.

    Open my_tokens.txt
  9. For the password, copy your saved Personal Access Token (PAT) from “my_tokens.txt”, and paste it into the Terminal session to complete the “git clone” command as seen in the image below:

    git clone command

    Note

    As seen in the image above, you won’t see the password as it is being pasted using the right-mouse-click-paste option, or manually typed in.

    This is a security measure to hide the password from prying eyes.

    The result will be similar to the following output:

    git clone https://github.com/<your_github_username>/github-lab1.git
    Cloning into 'github-lab1'...
    remote: Enumerating objects: 3, done.
    remote: Counting objects: 100% (3/3), done.
    remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
    Receiving objects: 100% (3/3), done.
  10. Use the ”ls” command to view the contents of the “Works” folder, which will now contain the cloned repo:

    ls -ltra

    Result:

    drwxr-x--- 21 wasadmin wasadmin 4096 Jul 3 15:34 ..
    drwxrwxr-x 3 wasadmin wasadmin 4096 Jul 3 15:58 .
    drwxrwxr-x 3 wasadmin wasadmin 4096 Jul 3 15:58 github-lab1

    Tip

    The command "ls -ltra" lists files and directories, including hidden ones, in long format with the most recently modified files shown at the end.

  11. Using the “tree” command check the contents of the “github-lab1” folder:

    tree github-lab1

    The result will be similar as seen in the output below:

    github-lab1
    └── README.md
    
    0 directories, 1 file

    Tip

    The "tree" command in Linux is used to display the directory structure in a hierarchical tree format.

    You should see that we have the “README.md” that was automatically created for us by the new repo wizard earlier in GitHub.

2.3. Setup Git configuration

In this part we will set up your Git identity. This identity will be used when you make changes to the local repository.

  1. First navigate into the repo:

    cd github-lab1

    Note

    You should now be in the “/home/wasadmin/Works/github-lab1” directory.

  2. Inside the cloned repository’s root directory, run the following commands to configure an example “local” Git identity for the repo:

    git config user.name "Bob"
    git config user.email "bob@example.com"

    Note

    Since this is a lab environment, we are setting a “local” config, which is only for this repo. We are using “Bob”, and “bob@example.com”, which are sample names.

    In a real-life scenario, you can set your actual name, and email.

    It is also possible to set this globally for all repos using the “--global” switch.

  3. Type the following command to verify that your Git configuration has been applied:

    git config --local --list

    Result will be a similar output to the following:

    core.repositoryformatversion=0
    core.filemode=true
    core.bare=false
    core.logallrefupdates=true
    remote.origin.url=https://github.com/<your_github_username>/github-lab1.git
    remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
    branch.main.remote=origin
    branch.main.merge=refs/heads/main
    user.name=Bob
    user.email=bob@example.com

    Note

    The “main” branch is the default name of the first branch, also known as the trunk (GitHub previously defaulted to “master”).

    It is possible to set settings in both Git and GitHub if you wish to use “master” as trunk instead of “main”.

2.4. Setup Credentials Helper

To set up a simple Git Credentials Helper locally to use your Personal Access Token (PAT) automatically, follow these steps:

  1. Open a new Terminal session (or use an existing Terminal session) on your lab machine.

  2. Run the following command to configure your Git credentials:

    git config --global credential.helper store

    Note

    This command sets the credential helper to "store," which means Git will store your credentials (including your PAT) locally.

    The token will be cached in a file called “~/.git-credentials” so future calls to GitHub will no longer require authentication.

  3. Issue the “git pull” command to force a re-authentication and save your GitHub credentials into the store.

    git pull

    The result will be a prompt for your GitHub username and PAT as we did earlier.

  4. If prompted, log in using your GitHub username, not your GitHub email. For the password, use your Personal Access Token (PAT) from “my_tokens.txt”, which should have been saved on your Lab machine’s desktop from a previous lab.

  5. Confirm your credentials have been stored as plain text using the cat command:

    cat ~/.git-credentials

    The result will be text similar to the following output:

    Confirm your credentials

    Note

    The credential will be in the format:
    https://<github_username>:<personal_access_token>@github.com

    Now, when you interact with remote repositories hosted on GitHub, Git will automatically use the stored credentials (PAT) for authentication.

    Important

    Storing credentials locally using the "store" credential helper may have security implications, especially if someone gains access to your local machine, because the credentials are stored in plain text.

    It is recommended to use a more secure method of authentication, such as password protected SSH keys or third-party credential managers, if possible.
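A minimal sketch of enabling the store helper, using a throwaway repository rather than the lab clone:

```shell
# Sketch: enable the "store" credential helper for one repository only
# (no --global), then read the setting back. Names are examples.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config credential.helper store
git config credential.helper          # prints: store
# After the next authenticated operation, Git would append a line of the
# form https://<username>:<token>@github.com to ~/.git-credentials.
```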

2.5. Create a Feature branch

In this part we will create a new branch for feature development, and thus begin an example of using the steps of GitHub flow.

  1. Run the following command to create a new branch for your feature development:

    git checkout -b feature/my-feature

    Result:

    Switched to a new branch 'feature/my-feature'
  2. Use the “git branch” command to list current branches, and display the active branch:

    git branch

    The result will be as follows:

    * feature/my-feature
    main

    Tip

    The asterisk (*) denotes the current branch, for example: “* feature/my-feature”

  3. Using VSCode, create a new file called “index.html” using the VSCode command-line tool as follows:

    code index.html

    Important

    If you are prompted with an “Authentication required” pop-up window, use “wasadmin” as the password to unlock.

  4. Paste the following HTML code into “index.html” in VSCode.

    <html>
        <head>
            <title>This is the title of the webpage!</title>
        </head>
        <body>
            <p>This is an example paragraph. Anything in the <strong>body</strong> tag will appear on the page, just like this <strong>p</strong> tag and its contents.</p>
        </body>
    </html>

    The resulting “index.html” file should look the same as in the image below:

    index.html file
  5. Save the file using the VSCode menu option: File > Save then exit VSCode using File > Exit in the main VSCode menu.

    Tip

    Use the VSCode “View” menu to set “Word Wrap” on, to make the text wrap when the editor window is not wide enough and to check you have all the required information in the file.

  6. Verify that you have created “index.html”, by printing it to the screen using the “cat” command as follows:

    cat index.html
    cat command
  7. Check the status of the repo using the following command:

    git status

    The result will be as follows:

    On branch feature/my-feature
    Untracked files:
      (use "git add <file>..." to include in what will be committed)
    	index.html
    
    nothing added to commit but untracked files present (use "git add" to track)
  8. Use the following commands to stage and commit your changes:

    git add index.html
    git commit -m "Implement my feature"

    Note

    Here we stage “index.html” explicitly by name. You can also stage every changed file in the current directory with “git add .” (the period refers to the current directory).

  9. Push the new feature branch to GitHub using the following git command:

    git push origin feature/my-feature

    The result of the “git push” will look similar to the output below:

    git push origin feature/my-feature
    Enumerating objects: 4, done.
    Counting objects: 100% (4/4), done.
    Delta compression using up to 4 threads
    Compressing objects: 100% (3/3), done.
    Writing objects: 100% (3/3), 437 bytes | 437.00 KiB/s, done.
    Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
    remote:
    remote: Create a pull request for 'feature/my-feature' on GitHub by visiting:
    remote: https://github.com/<your_github_username>/github-lab1/pull/new/feature/my-feature
    remote:
    To https://github.com/<your_github_username>/github-lab1.git
    * [new branch] feature/my-feature -> feature/my-feature

    We will now complete the rest of the flow in the GitHub Interface.
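The command-line half of the flow above can be rehearsed offline in a throwaway repository; the final push is shown commented out because it needs a real remote, and all names are examples.

```shell
# Offline sketch of the branch-and-commit flow: init, commit on trunk,
# branch, change, commit again. Names and content are examples.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.name "Bob"
git config user.email "bob@example.com"
echo "<html></html>" > index.html
git add index.html
git commit -q -m "Initial commit"
git checkout -q -b feature/my-feature     # create and switch to the branch
echo "<p>feature</p>" >> index.html
git add index.html
git commit -q -m "Implement my feature"
git rev-parse --abbrev-ref HEAD           # prints: feature/my-feature
# git push origin feature/my-feature      # would publish the branch
```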

2.6. Create a pull request

Using a pull request in the release workflow adds an extra layer of collaboration, code review, and quality assurance before the changes are deployed.

It allows for better coordination and visibility during the release process, ensuring that the changes introduced in the release branch are thoroughly reviewed and tested before being merged into the main branch or the designated release branch.

We will create a Pull Request in the GitHub interface using the following steps:

  1. In the existing open Chrome Web Browser window make sure you are in the GitHub interface:
    https://github.com/

  2. Open the main repositories page by clicking on your GitHub profile-icon in the top right-hand corner of the GitHub interface, and selecting “Your repositories” as seen below:

    Your repositories
  3. To load the “github-lab1” home page, locate and click on the “github-lab1” repo link from the repository list.

  4. Click on the "Pull requests" tab, as seen in the image below.

    Pull requests tab
  5. Click on the "New pull request" button.

  6. Select the “main” branch of the original repository as the “base” branch, and select your new feature branch (“feature/my-feature”) as the “compare” branch as seen in the example images below:

    main branch
    feature branch

    Important

    Once you have selected the two branches to compare, the “Comparing changes” page will appear.

    As we can see in the image below, the feature branch contains our recent changes, and so there is a difference (delta) between the “main” branch and the “feature/my-feature” branch.

    Recent changes

    This is where we create a pull request, which will allow collaborators to review the changes and ultimately decide whether or not to allow the merge.

    Note

    If the pull request is approved, then the feature branch will be merged into main.

  7. Click on "Create pull request".

  8. In the “Open a pull request” screen, complete the body of the request by typing:

    Please review :-)
    Type Please review
  9. Once the comment has been added, click the “Create pull request” button again to action the request.

  10. The page will load and merge options are presented as seen below:

    merge options are presented

    We are now ready to decide how we wish to merge.

2.7. Review and merge a pull request using the GitHub Interface

The project owner or other collaborators can review changes, add comments, and discuss within the pull request itself.

If everything looks good, the project owner can click on "Merge pull request" to merge your changes into the main branch.

Note

If this was an actual environment and depending on the team’s deployment process, you would likely deploy the merged branch to test or production-like environments. Testing would occur to verify and ensure that the merged branch runs without any issues in the chosen deployed environment.

Assuming that testing has proven successful, we will now merge the code from the feature branch into the main branch.

We can use the GitHub user interface to issue the merge on the “origin”, or we can do it in the local clone using git commands on the command-line.

In the following steps we will use the GitHub Interface option.

  1. Click the “Merge pull request” button for the pending pull request seen below:

    Merge pull Request

    The interface will update to present the “Confirm merge” option, where it is possible to update the commit message as seen below:

    Confirm merge
  2. Leave the commit message as is for consistency, and click the “Confirm merge” button.

    The interface will update again with the results of the merge as seen below:

    results of the merge

    We will now make sure our local repo is updated by checking out the “main” branch and issuing a git pull to receive the changes from the recent merge of the “feature/my-feature” branch into the “main” branch.

  3. Switch to your existing Terminal session, and issue the following git command to check out the “main” branch:

    git checkout main

    Result:

    Switched to branch 'main'
    Your branch is up to date with 'origin/main'.
  4. Issue the “git pull” command to receive the changes from GitHub

    git pull

    The result will be similar as seen in the following image:

    git pull results

    The main branch of the local clone is now up to date. We can issue the “git branch” command to confirm which branch is currently active.

  5. Display the active branch using the “git branch” command:

    git branch

    The result will show that the active branch is “main” denoted by the asterisk (“*”) as seen in the output below:

    feature/my-feature
    * main

    Congrats, you have completed a Pull Request, and subsequent Merge using the GitHub Interface.
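The merge-then-pull round trip above can be simulated offline: a bare repository stands in for GitHub, one clone plays the reviewer who performs the merge, and a second clone plays your lab machine catching up. All names and paths are examples.

```shell
# Offline sketch of "merge on the remote, then pull locally".
set -e
work=$(mktemp -d)
cd "$work"
git init -q --bare origin.git             # "origin" stands in for GitHub
git clone -q "$work/origin.git" reviewer
cd reviewer
git config user.name "Alice"
git config user.email "alice@example.com"
echo "v1" > index.html
git add . && git commit -q -m "Initial commit"
git push -q origin HEAD                   # publish the default branch
branch=$(git rev-parse --abbrev-ref HEAD)
cd "$work"
git clone -q "$work/origin.git" local     # your machine, before the merge
cd "$work/reviewer"
git checkout -q -b feature/my-feature
echo "v2" > index.html
git add . && git commit -q -m "Implement my feature"
git checkout -q "$branch"
git merge -q --no-ff -m "Merge feature/my-feature" feature/my-feature
git push -q origin "$branch"              # the "Merge pull request" step
cd "$work/local"
git pull -q                               # receive the merged changes
cat index.html                            # prints: v2
```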

2.8. Review and merge a pull request Manually

In this next part, we will perform another update to “index.html” file in the “feature/my-feature” branch locally. We will still use a Pull Request to request a merge into “main” as we did in the last part, however this time we will not use the GitHub interface for the merge action, instead we will use the command-line.

  1. In the current open Terminal session, check out the “feature/my-feature” branch again using the “git checkout” command as follows:

    git checkout feature/my-feature

    Result:

    Switched to branch 'feature/my-feature'
    Your branch is up to date with 'origin/feature/my-feature'.
  2. Set the upstream branch for the local branch "feature/my-feature" to the remote branch "origin/feature/my-feature" using the command below.

    git branch --set-upstream-to=origin/feature/my-feature feature/my-feature

    Note

    The command above is one single line.

    Tip

    In Git, an "upstream" branch is a reference to the remote branch that your local branch is tracking. When you push and pull changes, Git knows which remote branch to synchronize with.

  3. To ensure we are synchronized with the origin, execute the "git pull" command as follows:

    git pull
  4. Open the “index.html” file again, using the VSCode command-line tool:

    code index.html
  5. Edit the “index.html” file to change the HTML Title text from “This is the title of the webpage!” to the following:

    My Web Application

    The result will be as follows:

    Change text to My Web Application
  6. Save “index.html” using File > Save, then exit VSCode using File > Exit in the VSCode menu.

  7. Issue the “git status” command to see the state of the working tree:

    git status

    Result:

    git status
    On branch feature/my-feature
    Your branch is up to date with 'origin/feature/my-feature'.
    Changes not staged for commit:
    (use "git add <file>..." to update what will be committed)
    (use "git restore <file>..." to discard changes in working directory)
    modified: index.html
    no changes added to commit (use "git add" and/or "git commit -a")
  8. Stage (add) all the changes in the current directory, and commit using the following commands:

    git add .
    git commit -m "Updated Title"

    Important

    Remember to include the period (.) after the "git add" command, as shown in "git add .", to signify all files in the current directory.

    Result:

    [feature/my-feature a15b7d3] Updated Title
    1 file changed, 1 insertion(+), 1 deletion(-)
  9. Push the changes to Github (origin)

    git push

    Result:

    Enumerating objects: 5, done.
    Counting objects: 100% (5/5), done.
    Delta compression using up to 4 threads
    Compressing objects: 100% (3/3), done.
    Writing objects: 100% (3/3), 321 bytes | 321.00 KiB/s, done.
    Total 3 (delta 1), reused 0 (delta 0), pack-reused 0
    remote: Resolving deltas: 100% (1/1), completed with 1 local object.
    To https://github.com/<your_github_username>/github-lab1.git
    916f15d..a15b7d3 feature/my-feature -> feature/my-feature
  10. Go to the GitHub interface, which should still be open in a browser window, and navigate to the “github-lab1” home page.

  11. Click on the "Pull requests" tab.

  12. Click the New pull request button.

  13. Change the compare branch to “feature/my-feature”

  14. Click the “Create pull request” button.

    The interface will update and you will be presented with the “Comparing changes” page. This time you will see the “delta” modifications as seen in the image below:

    Comparing changes
  15. Add a comment as follows:

    Review the title change

  16. Click the “Create pull request” button again to action the creation of the Pull Request.

  17. The result will be the presentation of the Merge page as seen below:

    Merge page

    This time we will not use GitHub to initiate the Merge; we will instead do this from the command-line in our local clone of the repo.

  18. Open your existing Terminal session and issue the following commands:

    git config pull.rebase false
    git pull origin main
  19. You will be presented with a command-line editor (nano) asking you to confirm the commit message. We do not need to change the merge commit-message, so use CTRL-X to exit the nano editor (accepting the default message):

    The resulting output message will be as follows:

    * branch main -> FETCH_HEAD
    Merge made by the 'ort' strategy.
  20. Enter the following commands:

    git checkout main
    git merge --no-ff feature/my-feature
  21. You will be presented with a command-line editor (nano) asking for you to confirm the commit message as seen below:

    confirm the commit message
  22. We do not need to change the merge commit-message, so use CTRL-X to exit the nano editor (accepting the default message):

    The resulting output message will be as follows:

    Merge made by the 'ort' strategy.
    index.html | 2 +-
    1 file changed, 1 insertion(+), 1 deletion(-)
  23. Check the status again:

    git status

    Result:

    On branch main
    Your branch is ahead of 'origin/main' by 3 commits.
    (use "git push" to publish your local commits)
    nothing to commit, working tree clean
  24. Push the merged changes to update the “main” branch in GitHub (origin).

    git push -u origin main

    The resulting output will be similar to below:

    Enumerating objects: 2, done.
    Counting objects: 100% (2/2), done.
    Delta compression using up to 2 threads
    Compressing objects: 100% (2/2), done.
    Writing objects: 100% (2/2), 376 bytes | 376.00 KiB/s, done.
    Total 2 (delta 1), reused 0 (delta 0), pack-reused 0
    remote: Resolving deltas: 100% (1/1), done.
    To https://github.com/<your_github_username>/github-lab1.git
    69ce66b..49d75f6 main -> main
    Branch 'main' set up to track remote branch 'main' from 'origin'.

    Congrats, You have now successfully set up a Git Repo, created a feature branch, completed a pull request and merged code, thus following the essentials of the GitHub Flow workflow.
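The effect of the “--no-ff” switch used above can be demonstrated in a throwaway repository: even when a fast-forward is possible, it records a dedicated merge commit, preserving the branch in the history. Names below are examples.

```shell
# Sketch: --no-ff forces a merge commit where a plain merge would
# simply fast-forward the trunk pointer.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.name "Bob"
git config user.email "bob@example.com"
echo "one" > f.txt && git add . && git commit -q -m "base"
trunk=$(git rev-parse --abbrev-ref HEAD)
git checkout -q -b feature/my-feature
echo "two" >> f.txt && git add . && git commit -q -m "Updated Title"
git checkout -q "$trunk"
git merge -q --no-ff -m "Merge feature/my-feature" feature/my-feature
git rev-list --merges --count HEAD     # prints: 1
```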

2.9. View Branches on GitHub

Now that we have created, committed, pushed and merged the new branch a few times, we can look further into the GitHub interface to see what branches currently exist in the repo, and drill down to see the latest commits against each branch.

We will use the following steps to achieve this.

  1. Using an Existing or New Chrome browser window, open the GitHub interface:

  2. Navigate to the “github-lab1” repository on GitHub

  3. Once on the “github-lab1” repository home page, open the branches page, by clicking on the branches icon as shown below:

    github-lab1 repo
  4. In the “branches” page, click on the “feature/my-feature” branch to display the contents of the branch as shown below:

    branches page

    You will be taken to the home page of the “feature/my-feature” branch as seen below:

    feature/my-feature branch
  5. To investigate the branch commit history, click on the “History” link as shown below:

    History link

    You will then be taken to the commits page of the “feature/my-feature” branch as seen below:

    feature/my-feature branch
  6. To view the actual commit labeled with the commit message “Updated Title”, click on the commit message label as shown in the image above.

    The result will be details page for the commit. In this page, we can see the actual change made as seen in the image below:

    actual change made

    Note

    The images depicted are samples to help you navigate, and may not be exactly the same as your GitHub interface.

    As you can see, GitHub has a very intuitive interface designed for easy and simple use.

  7. Navigate back to the “github-lab1” repo using the main link in the tab-menu as shown below:

    github-lab1 repo

    At this point we are done navigating around the “feature/my-feature” branch.

    Congratulations on finishing the fundamental aspects of GitHub Workflow; however, you have only witnessed a fraction of its functionalities, and as you continue to utilize GitHub for your projects, you will discover and adopt new approaches to enhance your experience.

2.10. Clean Up

  1. Close all open Terminal sessions.

  2. Close all open Web Browser windows.

  3. Close all open VSCode files & windows.

2.11. Summary

This Lab focused on using the GitHub Flow workflow, a lightweight branch-based workflow, to manage repository operations such as creating Branches, Pull Requests and Merges.

The following core topics were covered:

  • Creating and Cloning a repository
  • Configuring Git
    • Git configuration involved setting up the user’s name and email using git config commands, ensuring proper identification for commits.
  • Creating a feature branch
    • A new branch mimicking feature development was created using the “git checkout -b” command
  • Making changes and committing
    • Necessary changes were made to files, and a new file called index.html was created with specific content. The changes were staged using “git add” and committed using “git commit -m”
  • Pushing the branch
    • We pushed the feature branch to the forked repository on GitHub using the “git push” command
  • Creating a pull request
    • In the GitHub repository, navigated to the "Pull requests" tab, clicked on "New pull request," selected the appropriate branches for comparison, and created a pull request
  • Reviewing and merging the pull request
    • Reviewed the changes, with potential to add comments, and see how teams can discuss via a thread within the pull request
  • Merging into main
    • As an example, the merge was approved, and the feature branch was merged into the main branch using the git commands “git checkout”, “git merge”, and “git push”
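
The Git commands summarized above can be sketched as one end-to-end, local-only session. This is a sketch, not the lab itself: the repository location, branch name, and file contents below are examples, and a local “git merge” stands in for the pull request performed on GitHub.

```shell
#!/bin/bash
# Local-only sketch of the GitHub Flow steps from this lab (no remote needed).
set -e
repo=$(mktemp -d)                       # throwaway repository location
cd "$repo"
git init -q
git checkout -q -b main                 # name the default branch "main"
git config user.name  "Student"
git config user.email "student@example.com"
echo "<h1>Hello</h1>" > index.html
git add index.html
git commit -q -m "Initial commit"
git checkout -q -b feature/my-feature   # create the feature branch
echo "<h1>Updated Title</h1>" > index.html
git commit -aqm "Updated Title"         # stage and commit the change
git checkout -q main
git merge -q feature/my-feature         # on GitHub, the pull request did this
git log --oneline                       # both commits now reachable from main
```

On GitHub, the merge step is performed through the pull request interface rather than a local “git merge”.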

Python Build Script Essentials

Module 3

Build scripts play a vital role in automating essential tasks within software development and infrastructure automation. These tasks encompass various activities, such as configuring environments, installing dependencies, conducting tests, and managing deployments. By ensuring a consistent and reliable execution of these processes, build scripts greatly enhance the overall development workflows.

In this lab, we will focus on building a basic Flask application. Flask is a lightweight web framework for Python that enables quick and efficient web application development.

Through creating this application, we will explore common techniques used in build scripts for Python projects. These techniques will showcase straightforward application setup and maintenance, fostering efficient development practices and seamless integration with continuous integration and continuous deployment (CI/CD) systems.

3.1. Create a simple Flask Application

To give some context, let’s start with creating a simple Flask application.

  1. Close all unused terminal sessions from any previous labs, then create a New Terminal session.

  2. Create a new folder called “lab3” which is located at “/home/wasadmin/Works/lab3” using the following commands:

    mkdir -p /home/wasadmin/Works/lab3
  3. Navigate to the folder using the “change directory (cd)” command:

    cd /home/wasadmin/Works/lab3
  4. Launch VSCode to open using the current folder as the working directory

    code .

    Note

    Don’t forget the period “.” which signifies to open the current folder

    Tip

    If the welcome page loads, you can untick the “Show welcome page on startup”, then close the Welcome page, using the small “x” icon.

  5. Create a new file called “my_app.py” using the new file icon in the file-explorer as shown below:

    Google menu item
    Google shortcut
  6. Copy & paste in the following code into “my_app.py”, and save the file using File > Save using the VSCode menu.

    from flask import Flask
    
    app = Flask(__name__)
    
    @app.route('/')
    def hello():
        return 'Hello, World!'
    
    if __name__ == '__main__':
        app.run()
  7. Have a read of the code in “my_app.py”

    Unlike many other programming languages that use braces or keywords to delineate blocks of code, Python uses indentation to determine the grouping and nesting of statements. In Python, indenting is a fundamental aspect of the language’s syntax and is used to define the structure and organization of code blocks.

    This code uses an indentation of 4 spaces for each level of indent. Make sure that your code matches the image below:

    Python style indentation

    Explanation of the “my_app.py” Python code

    This basic Flask application defines a single default route ("/") and returns a "Hello, World!" message. Let’s break down the code:

    Importing the necessary module:

    from flask import Flask imports the Flask module, which provides the necessary functions and classes to create and run a Flask application.

    Creating the Flask application instance:

    app = Flask(__name__) creates an instance of the Flask class and assigns it to the variable app. The “__name__” parameter is a special Python variable that represents the name of the current module.

    Defining a route and view function:

    @app.route('/') is a decorator that tells Flask to associate the following function with the root URL ('/'). In this case, the function is hello().

    def hello(): is a view function that will be executed when a request is made to the specified route ('/'). It returns the string 'Hello, World!'.

    Running the application:

    if __name__ == '__main__': is a condition that checks if the script is being executed directly (not imported as a module).

    app.run() starts the Flask development server, allowing the application to handle incoming requests.
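
    The “__name__” mechanism described above can be demonstrated without Flask at all. The sketch below (the function and module names are illustrative) shows how the value Python assigns determines whether the server-start code runs:

```python
# Sketch: why the "if __name__ == '__main__'" guard matters.
# When a file is executed directly, Python sets __name__ to "__main__";
# when the file is imported, __name__ is the module's own name instead.

def run_mode(module_name):
    """Return how the file is being used, given its __name__ value."""
    return "script" if module_name == "__main__" else "imported"

print(run_mode("__main__"))   # executed directly -> script
print(run_mode("my_app"))     # imported by another file -> imported
```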

  8. Save the file.

  9. Close the “my_app.py” file using the VSCode menu: File > Exit.

3.2. Running the Flask Application

To run this Flask application, you need to execute the Python script that contains this code, which will start the Flask development server, then you can access the application by visiting “http://localhost:5000” in your web browser.

  1. In your Existing Terminal session, navigate to the “lab3” working folder:

    cd /home/wasadmin/Works/lab3
  2. Install the Flask dependency which is required to run a flask application:

    pip install Flask==2.3.2
  3. Run the flask app using the following command:

    python my_app.py

    The Flask development server will start as seen below:

    Flask dev server
  4. Using the Chrome (or Firefox) launcher located on the Desktop, or the using the main Ubuntu Applications menu, open a New Browser window and navigate to http://localhost:5000.

    Note

    If you get a prompt informing that “Authentication is required”, then use “wasadmin” as the password.

    You should see the message "Hello, World!" displayed on the page as shown below:

    Hello world! shown

    Tip

    In Flask apps, the default port is 5000. This can be overridden various ways. We will see how to do this in later labs.
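
    As one illustration of overriding the default, the flask command-line tool reads the “FLASK_RUN_PORT” environment variable. The helper below is an illustrative function (not part of Flask) that mimics that fallback logic:

```python
import os

def resolve_port(env=None):
    """Return the port to serve on: FLASK_RUN_PORT if set, else the default 5000."""
    env = os.environ if env is None else env
    return int(env.get("FLASK_RUN_PORT", "5000"))

print(resolve_port({}))                           # no override -> 5000
print(resolve_port({"FLASK_RUN_PORT": "8080"}))   # override -> 8080
```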

  5. Close the Browser window.

  6. Stop the Flask development server by using CTRL+C in the Terminal session where the app is running.

3.3. Dependency Management using a requirements file

In the previous part, you manually installed the Flask dependency for your Flask application using the pip command. However, installing multiple Python dependencies using pip can be time-consuming and repetitive.

As builders, we strive for efficiency and automation in our workflows. It would be beneficial to have a centralized file that lists all the required dependencies, along with their specific versions. This file would provide a clear overview of the dependencies being used at any given time.

To address this, Python allows you to specify dependencies in a file called "requirements.txt". This file serves as a manifest of the dependencies needed for your project. By using the command "pip install -r requirements.txt", you can automate the installation process. This command reads the requirements.txt file and installs all the necessary dependencies, including Flask.

The “requirements.txt” file not only streamlines the installation process, but also becomes a valuable tool for managing and reproducing the environment required for your Flask application. It also acts as self-documenting code, providing insight into the dependencies used in your project.

  1. Using any open terminal navigate to “/home/wasadmin/Works/lab3”

    cd /home/wasadmin/Works/lab3
  2. Open VSCode in the current folder, and create a new file called “requirements.txt” using the exact same process we used earlier to create the “my_app.py” file.

    code .
  3. Copy and paste in the following code:

    Flask==2.3.2
    # Add other dependencies as needed

    The result should be the same as the image below:

    Tip

    The hash (#) character denotes a commented line

  4. Save the “requirements.txt” file using File > Save, and exit VSCode using File > Exit.

Explanation:

By specifying Flask==2.3.2 in the “requirements.txt” file, you ensure that Flask version 2.3.2 will be installed when running “pip install -r requirements.txt”. If you omit the version specification, pip will install the latest available version of Flask.

Including specific versions of dependencies in the “requirements.txt” file helps maintain consistency and reproducibility when working with different environments or collaborating with others.
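
For illustration, pip supports several version-specifier styles (defined in PEP 440) in a requirements file. The entries below are examples of the syntax, not additions this lab requires:

```text
Flask==2.3.2       # exact pin: always installs version 2.3.2
Flask>=2.3,<3.0    # range: any 2.x release from 2.3 onward
Flask              # unpinned: installs the latest available release
```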

  1. Run the following command to check which version of Python is being used:

    python --version

    Result:

    Python 3.10.4
  2. Let’s uninstall the existing Flask dependency we installed earlier by using the following command:

    pip uninstall flask==2.3.2 -y
  3. Install the Flask dependency again, but this time using “pip” and the “requirements.txt” file by using the following command:

    pip install -r requirements.txt
  4. The result will be as follows - "Successfully installed Flask":

    Successfully installed Flask

    We will now move on to learn how we can run Flask applications in various ways.

3.4. Methods of running Applications

The default, and easiest, way to run a Flask application is to have the main entry point and the code base in the same file. This approach is more straightforward, since you can start your application with just one script. It’s a common pattern for small applications or during the initial development phase.

Pros:

  • Simplicity
    • Fewer files to manage and a lower complexity level make this approach more accessible to beginners.
  • Easy to understand
    • Since everything is in one file, it’s easier to understand the flow of the application.

Cons:

  • Limited scalability
    • As the application grows, it may become more difficult to manage if everything is in a single file.
  • Testing
    • It’s harder to run unit tests on the application if the running code is mixed with the application code.
  • Inefficient separation of concerns
    • Keeping the server running code mixed with application code is generally not considered a best practice, as it may cause confusion as the application grows.

The example command syntax to run this kind of app would be: “python my_app.py”, which we used earlier.

Running a Flask Application using a wrapper

We are now going to see how to run the same application, but this time using a wrapper pattern to separate concerns. This new approach will be to separate the application code “my_app.py” from the server running code “run.py”, which is a more common pattern for larger applications.

  1. Launch VSCode and create a new file called “run.py” in the same folder as “my_app.py”

    code .
  2. Copy and paste in the following Python code:

    from my_app import app
    
    if __name__ == '__main__':
        app.run()

    Check that the new “run.py” file contains the same code as depicted below:

    See code in run.py file
  3. Save the file using File > Save, and exit VSCode using File > Exit

  4. In the same Terminal session as you used earlier, run your Flask application using the wrapper by executing the following command:

    python run.py
  5. As before, using the Chrome (or Firefox), open a New Browser window and navigate to http://localhost:5000.

    You should see the message "Hello, World!" displayed on the page as before.

  6. Once you have verified that the app is running using the wrapper approach which uses “run.py”, close the Browser window.

  7. Stop the development server by using CTRL+C in the Terminal session where the app is running.

3.5. Unit Testing: Simple Example

Unit tests are used to verify the correctness of individual components or units of code in isolation, ensuring they function as intended. By running unit tests, builders can catch bugs and errors early, maintain code quality, and confidently make changes or additions to their codebase.

Unittest is a Python module used for organizing and running unit tests. It is built-in to Python and available as part of the standard Python library. It provides a framework that helps developers write and execute tests efficiently.

In this part, we will learn how to create and run a simple unit test using the “Unittest” module.

  1. Launch VSCode using the command:

    code .
  2. Create a new file called “test.py” in the same location as “my_app.py”.

  3. Copy and Paste in the following Python code:

    import unittest
    from my_app import app
    
    app.testing = True
    
    class FlaskTests(unittest.TestCase):
        def test_hello(self):
            client = app.test_client()
            response = client.get('/')
            self.assertEqual(response.status_code, 200)
            self.assertEqual(response.data.decode('utf-8'), 'Hello, World!')
    
    if __name__ == '__main__':
        unittest.main()

    Make sure that “test.py” contains all the code, with the correct indentations, and looks like the code shown below:

  4. Save the file using File > Save, and exit VSCode using File > Exit

    Explanation of test.py

    In summary, the code in “test.py” sets up a test case for a Flask application and defines a single test method that checks if the root URL of the application returns a response with a status code of 200 and the content 'Hello, World!'.

    Here is a break down of the contents of “test.py”, explaining what the code is doing:

    Breakdown of “test.py”

    The code imports the unittest module, which is a framework for writing and running tests in Python.

    It also imports the app object from a module called my_app. This suggests that there is a Flask application defined in that module.

    The line app.testing = True sets a flag in the Flask app object to indicate that the application is being tested.

    The code defines a class called FlaskTests, which inherits from unittest.TestCase. This means that it’s a test case class that can contain individual test methods.

    Inside the FlaskTests class, there is a single test method called test_hello. Test methods in unittest are identified by their names starting with the word "test". This method will test a specific functionality of the Flask application.

    Within the test_hello method, it creates a client object by calling app.test_client(). This client object allows us to send HTTP requests to the Flask application.

    It then sends a GET request to the root URL ("/") of the application using client.get('/').

    The response object contains the server’s response to the request. The code uses assertions to check if the response is as expected.

    The line self.assertEqual(response.status_code, 200) checks if the response status code is 200, which indicates a successful request.

    The line self.assertEqual(response.data.decode('utf-8'), 'Hello, World!') checks if the response data, decoded as UTF-8, is equal to the string 'Hello, World!'.

    Finally, the code checks if the script is being run directly (not imported as a module) using if __name__ == '__main__', and if so, it runs the tests using unittest.main().

    Run the Unit Test

    We are now going to create a bash script to run the Unit Test. This script could be used in a CI/CD pipeline.

  5. Using your current terminal session, create a new file called “run_test.sh” using the touch command:

    touch /home/wasadmin/Works/lab3/run_test.sh
  6. Using the command-line, launch VSCode and automatically open the new (empty) “run_test.sh” file using the following command:

    code /home/wasadmin/Works/lab3/run_test.sh
  7. Copy and Paste the following bash script into “run_test.sh”:

    #!/bin/bash
    python -m unittest test.py

    The “run_test.sh” script should look exactly as depicted in the image below:

    run_test shell script
  8. Save the “run_test.sh” file using File > Save, and then exit VSCode using File > Exit.

  9. Run the Unit Test by executing the following command:

    source run_test.sh

    The result will be similar to the following output:

    .
    ----------------------------------------------------------------------
    Ran 1 test in 0.006s
    
    OK

    We can see that the test ran successfully.

    Tip

    We can also use the “bash run_test.sh” command instead of the “source run_test.sh” command, so that any environment changes made within the script will not affect the current shell session.

    Explanation of the contents of “run_test.sh”

    The “-m” switch is a command-line option for running a Python module as a script. It allows you to execute a module directly without specifying the file path explicitly.

    For example, when you use “python -m unittest test.py”, the “-m” switch treats unittest as a module and ”test.py” as the script to be executed within that module.

    Note

    If you don’t use the “-m” switch and instead run python “test.py”, the Python interpreter treats “test.py” as a standalone script and executes it directly. This means that the code in “test.py” will be executed from top to bottom as a script, rather than being treated as a module within the unittest framework.
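
    To see what the runner does under the hood, the sketch below builds and runs a tiny test suite programmatically (the class and test names are illustrative). This is the same machinery that “python -m unittest” drives from the command line:

```python
import unittest

class MathTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(2 + 2, 4)

# Load the TestCase into a suite and run it, as the CLI runner would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(MathTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("tests run:", result.testsRun, "successful:", result.wasSuccessful())
```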

    We will create more unit tests using various approaches in subsequent labs.

    The lab is now complete.

3.6. Clean Up

  1. Close all open Terminal sessions.

  2. Close all VSCode files, and windows.

  3. Close all open Web Browser windows.

3.7. Summary

This lab introduced the Flask web framework and its use in building web applications.

During the lab we created a simple Flask application example with a single route that returns "Hello, World!".

We looked at Dependency management with the use of a requirements.txt file for installing Flask and other dependencies.

We explored two methods of running a Flask application: either directly, or using a wrapper file.

We implemented basic unit testing using the built-in unittest module, with an example test for the Flask application.

Python Project Management

Module 4

In this lab, you will learn how to build and manage a Python project, handle package dependencies and run unit tests using Pip and Python Virtual Environments.

By the end of this lab, you will have a solid understanding of isolating Python project dependencies, and ensuring the quality of your code through unit testing using the “pytest” framework.

This lab is divided into several parts, each focusing on a specific aspect of Python project management.

4.1. Create directory structure for storing project files

In this part, you will create a directory structure for storing the Python project files.

  1. Close all existing open Terminal sessions from previous labs.

  2. Open a new Terminal session.

  3. Navigate to the “/home/wasadmin/Works” directory using the following command:

    cd /home/wasadmin/Works
  4. Create the “lab4” directory and change into it using the following single-line command sequence:

    mkdir lab4 && cd lab4
  5. Create directory structure for storing the Python Project code:

    mkdir -p MyProject/my_greeting_app

    Note

    The command above creates a folder called “MyProject” and a child sub-folder called “my_greeting_app”.

    “MyProject” is the name of your project. It’s the base directory you will use for storing your project files.

    “my_greeting_app” is the custom Python package name.

  6. Verify the current folder structure using the “tree” command:

    tree

    The result should be similar to the following output:

    .
    └── MyProject
        └── my_greeting_app
    
    2 directories, 0 files
  7. Switch to “MyProject” directory:

    cd MyProject
  8. Use the “Print Working Directory (pwd)” command to show the current directory:

    pwd

    Result should be:

    /home/wasadmin/Works/lab4/MyProject

4.2. Create a Flask project

In this part, you will write simple Python code to create a simple flask app. There will be 2 files. One will contain a custom class (greeter.py) and the second will contain the main function which makes use of the custom class (app.py).

  1. Create “greeter.py” by using the Visual Studio (VSCode) command line shortcut:

    code my_greeting_app/greeter.py
  2. Enter following code:

    class Greeter:
        def say_hello(self):
            return "Hello, Planet Earth!"

    Tip

    Use the tab key for indentation. VSCode will detect the code is python due to the “.py” extension, and defaults to using “4 spaces” for each tab (indentation).

    Check that your code has the correct indentation as shown below:

    Python style indentation
  3. Save the “greeter.py” file using File > Save, and then exit VSCode using File > Exit.

  4. In a similar fashion, create “app.py” file using the VSCode command-line shortcut:

    code my_greeting_app/app.py
  5. Copy and Paste the following code into “app.py”:

    from flask import Flask
    from my_greeting_app.greeter import Greeter
    import os
    
    def create_app():
        app = Flask(__name__)
        greeter = Greeter()
    
        @app.route('/')
        def hello():
            greeting = greeter.say_hello()
            return greeting
        return app
    
    if __name__ == '__main__':
        app = create_app()
        app.run()

    The “app.py” file should contain the code, and have the exact same indentations as shown in the image below:

    Code in app.py file
  6. Save the “app.py” file using File > Save, and then exit VSCode using File > Exit.

  7. Create an empty “__init__.py” file using the touch command to make the folder a package.

    touch my_greeting_app/__init__.py

    Note

    The “__” sequence, is two underscores.

  8. View and validate that you have the correct directory structure using the tree command:

    tree

    Result:

    .
    └── my_greeting_app
        ├── __init__.py
        ├── app.py
        └── greeter.py
    
    1 directory, 3 files

    Tip

    On Ubuntu, the tree command can be installed using the following command sequence: “sudo apt update && sudo apt install tree”.

You have successfully created a simple Flask project containing the “greeter.py” and “app.py” files. Now let’s move on to the next part.
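
Note that “app.py” uses an application-factory style: a create_app() function builds and returns the app, instead of creating it at import time. This makes testing and configuration easier, because each call produces a fresh, fully configured instance. Below is a Flask-free sketch of the same idea; all class, function, and route names are illustrative, not part of Flask:

```python
# Minimal illustration of the factory pattern used by create_app() in app.py.
class MiniApp:
    def __init__(self):
        self.routes = {}          # maps URL paths to view functions

def create_app():
    app = MiniApp()
    app.routes['/'] = lambda: "Hello, Planet Earth!"   # register the root route
    return app                    # each call yields a fresh, configured app

app = create_app()
print(app.routes['/']())
```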

4.3. Create a Python Virtual Environment

In this section of the lab, you will be setting up a Python virtual environment.

A Python virtual environment is a self-contained environment that allows you to install packages and dependencies specific to your project, without affecting the global Python environment.

By using virtual environments, you can achieve isolation from other Python projects and have the flexibility to work with different versions of Python simultaneously. This means you can have multiple projects running on different Python versions without any conflicts or compatibility issues.

Important

It is recommended that you use Virtual Environments for Python projects as much as possible.

Note

We have already pre-installed Pyenv, a third-party tool used for managing multiple Python versions on a single machine. It locates the virtual environments outside of the current working directories, and thus the environments are not accidentally checked in with code.

  1. Verify that “pyenv” is installed using the commands:

    pyenv global 3.10.4
    pyenv version

    The resulting reported python version should be “3.10.4”

  2. Now, install another specific version of Python (version 3.10.7) using the “pyenv” command as follows:

    pyenv install 3.10.7

    Note

    If you get a prompt that “3.10.7” already exists, you can choose “n” to skip the installation or you can type “y” to install it again.

    If you choose “y” then be patient, the installation process can take a few minutes.

  3. We require “3.10.7” as the default version of Python from now on, so set PyEnv to use version 3.10.7 by issuing the following commands:

    pyenv local 3.10.7
    pyenv version

    The reported python version should now be “3.10.7”

    Note

    When we use pyenv local <python_version> a config file is generated named “.python-version”

  4. Check the contents of the .python-version file using the “cat” command:

    cat .python-version

    Result:

    3.10.7
  5. Verify that the “pyenv-virtualenv plugin” is installed:

    pyenv virtualenv --version

    Result:

    pyenv-virtualenv 1.2.1 (python3.10 -m venv)

    Important

    If the plugin is not installed, then please speak to the instructor, because you could be using the wrong VM!

  6. Create a new Python virtual environment using “pyenv” by running the following command:

    pyenv virtualenv 3.10.7 myproject-env

    Note

    It is also possible to use the native Python command “python -m venv .venv”, which utilizes the built-in venv module in Python. It creates a virtual environment within the current directory. In the aforementioned example, the virtual environment is stored in a folder named “.venv”. This approach is part of the standard library and does not require any additional installations.
    However, in this lab, we opt to use Pyenv which is a more powerful approach and has more features.
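
    A minimal sketch of that built-in approach, using a throwaway temporary directory rather than the project folder (the “.venv” name is a convention, not a requirement, and “python3” is used here for portability):

```shell
#!/bin/bash
# Create, use, and leave a virtual environment with the built-in venv module.
set -e
tmp=$(mktemp -d)                            # throwaway location for this sketch
python3 -m venv "$tmp/.venv"                # create the environment
source "$tmp/.venv/bin/activate"            # activate it (bash/zsh syntax)
python -c 'import sys; print(sys.prefix)'   # interpreter now resolves inside .venv
deactivate                                  # return to the previous environment
```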

  7. View the list of Pyenv managed Virtual Environments using the following command:

    pyenv virtualenvs

    The result will look something like:

    Successfully created pyenv

    To use a specific virtual environment we need to activate it.

    Activating the virtual environment ensures that the subsequent commands and installations are performed within the virtual environment, and not in the global python installation.

  8. Activate the virtual environment using the following command:

    pyenv activate myproject-env

    Your terminal session prompt will now show the activated virtual environment as seen in the image below:

    Activated env

    Note

    The bash prompt has changed, indicating the virtual environment
    “myproject-env” is active.

    Tip

    When the virtual environment is no longer needed, we can deactivate it using the command: “pyenv deactivate myproject-env”

4.4. Dependency Management

In this part, you will learn how to manage your project’s dependencies using a “requirements.txt file” and install them using “pip” into the activated virtual environment.

  1. Using VSCode command-line shortcut create a new file called “requirements.txt”:

    code requirements.txt
  2. Enter the following single line, to specify the required “Flask” dependency:

    Flask
  3. Save the newly created “requirements.txt” file using File > Save, and then exit VSCode using File > Exit.

    Tip

    We want the latest version of Flask, so we specify no version number and just enter “Flask”.

  4. Verify the folder structure using the “tree” command:

    tree

    The result should match the following output:

    .
    ├── my_greeting_app
    │   ├── __init__.py
    │   ├── app.py
    │   └── greeter.py
    └── requirements.txt
    
    1 directory, 4 files
  5. Install the dependencies listed in the “requirements.txt” file using “pip”:

    pip install -r requirements.txt

    You have now installed the latest version of Flask into the virtual environment.

    In the next part, we will continue on and run the application.

4.5. Using Flask application

In this part, you will run the Flask application using two different methods:

  • Using the Python Interpreter
  • Using the Flask command-line tool

Using the Python Interpreter

  1. Run the Flask application using the Python interpreter set by the virtual environment

    python -m my_greeting_app.app
  2. Verify the application is running by opening a new browser window (Using Chrome or Firefox) and navigating to http://localhost:5000/ or http://127.0.0.1:5000/

    Note

    If, when launching Chrome or Firefox, you get a prompt 'Authentication required: An application wants access to the keyring “Default”, but it is locked', use “wasadmin” as the password.

    The result will be as shown in the image below:

    Hello, Planet Earth!
  3. Once you have verified that the application is running in the Flask Development Server and functioning as expected, locate the Terminal window where the Flask application is running, then issue CTRL-C in the Terminal to close the running flask application.

    The Flask Command-line Tool

    We will try another method (“flask run”)

  4. In the same Terminal session, set the Flask app’s entry point by setting the “$FLASK_APP” environment variable:

    export FLASK_APP=my_greeting_app/app.py
  5. Now, run the flask app using the flask command-line tool:

    flask run
  6. Verify the application is running by opening a new browser window (Using Chrome or Firefox) and navigating to http://localhost:5000/ or http://127.0.0.1:5000/.

  7. Exit the app after you see it running the second time using “CTRL-C”.

    CTRL-C

    Comparing the two approaches

    The reason to use different methods to run a Flask app, is to provide flexibility and accommodate different use cases.

    Let’s explore the reasons for using each method:

    Running the Flask application using the Python interpreter:

    • This method is useful when you want to explicitly specify the Python interpreter set by the virtual environment. It ensures that the Flask app runs within the virtual environment’s Python environment, which can be beneficial for dependency management and ensuring compatibility.
    • By using the “-m” flag followed by the module name (“my_greeting_app.app”), you execute the app module as a script.

      Running the Flask app using the Flask command-line tool:

    • The Flask command-line tool provides a convenient way to run Flask applications. It automatically sets up the required environment variables and simplifies the process.
    • By setting the “FLASK_APP” environment variable to “my_greeting_app/app.py”, you specify the entry point of your Flask app.
    • When you run “flask run”, it starts the development server with the necessary configurations.

      Using these two methods offers different advantages. Running the app directly with the Python interpreter gives you more control over the execution environment, while using the Flask command-line tool simplifies the process and sets up the environment automatically.

      By demonstrating both methods in the instructions, you are given the opportunity to understand and utilize different approaches based on your specific requirements and preferences.

      Note

      It is important to understand that there is often more than one way to run Python-based apps, and it is only over time, as you encounter other projects or build your own processes, that you will appreciate the possibilities.

4.6. Installing the pytest dependency

In this part, the focus is on installing the “pytest” dependency. This step is important when it comes to creating unit tests for your Python application, especially when the goal is to automate the process of building and deploying the application.

Importance of Unit Tests:

Unit tests are a vital part of software development. They allow you to verify that individual components, or units, of your code are functioning correctly. By writing tests, you can ensure that your application behaves as expected and identify any potential issues or bugs early on in the development process.

Purpose of Automation:

Automating the process of building and deploying your Python application helps streamline development workflows. It allows you to run tests automatically, ensuring that any changes you make to the codebase do not introduce new bugs or regressions. Automation also helps save time and effort by executing repetitive tasks in a consistent and reliable manner.

Pytest:

Pytest is a widely used testing framework in the Python ecosystem. It provides a simple and powerful way to write and run tests for your code. By installing “pytest” as a dependency, you gain access to a rich set of testing features and functionalities.
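To make the idea concrete, here is a minimal, self-contained example of a pytest-style unit test. This is a hypothetical illustration (the function `add` and file name `test_sample.py` are not part of the lab's template files):

```python
# test_sample.py -- a minimal unit test in the pytest style.
# (Hypothetical example; the lab's actual tests are provided as templates later.)

def add(a, b):
    """A tiny function under test."""
    return a + b

def test_add():
    # pytest treats any function named test_* as a test case;
    # a plain assert statement is all that is needed.
    assert add(2, 3) == 5
```

Running “pytest” in the directory containing this file would discover and execute `test_add` automatically.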

We will now modify the requirements.txt to include “pytest” and install using pip

  1. Open the existing “requirements.txt” file using the VSCode command-line tool:

    code requirements.txt
  2. At the end of the file, append the following line:

    pytest==7.3.2

    The resulting update to “requirements.txt” will look exactly like the image below:

    requirements.txt shown
  3. Save the “requirements.txt” file, and Exit VSCode.

  4. Using the command-line, check that the file has two dependencies using the “cat” command.

    cat requirements.txt

    Result:

    Flask
    pytest==7.3.2
  5. Using Pip, install the new dependencies into the virtual environment as specified in “requirements.txt”:

    pip install -r requirements.txt

    We are now ready to create and run a Unit Test.

4.7. Running Unit Tests with PyTest

In this part, you will write a Unit Test, then use “Pytest” to run your test.

  1. Check that you are in the correct folder “/home/wasadmin/Works/lab4/MyProject” using the pwd command:

    pwd

    Result:

    /home/wasadmin/Works/lab4/MyProject
  2. Create a tests directory to contain Unit Tests:

    mkdir tests
  3. Copy the sample template unit test files from “/home/wasadmin/Student_Templates/lab4/MyProject/tests” into the tests folder using the following command:

    cp -r /home/wasadmin/Student_Templates/lab4/MyProject/tests/* tests
  4. Using the tree command, verify that you have two new files in the “tests” folder:

    tree

    You should now see a folder structure as follows:

    .
    ├── my_greeting_app
    │   ├── __init__.py
    │   ├── __pycache__
    │   │   ├── __init__.cpython-310.pyc
    │   │   ├── app.cpython-310.pyc
    │   │   └── greeter.cpython-310.pyc
    │   ├── app.py
    │   └── greeter.py
    ├── requirements.txt
    └── tests
        ├── __init__.py
        └── test_hello.py
    
    3 directories, 9 files
  5. Open the “test_hello.py” file, and have a quick read through.

    code tests/test_hello.py

    Note

    You do not have to fully understand the code, but it is a good example of how to write a Unit Test that is designed to be run by Pytest.

    An Explanation of “test_hello.py”

    Importing necessary modules:

    • The code begins by importing the required modules: “pytest” for writing tests, “url_for” from Flask for generating URLs, “create_app” function to create the Flask application instance, and Greeter class from the “my_greeting_app.greeter” module.

      Setting up fixtures:

    • Fixtures in pytest are used to provide reusable setup and teardown code for tests. Two fixtures are defined: “app” and “client”.
    • The app fixture creates the Flask application instance using the create_app() function. It sets the TESTING configuration flag to True and configures the server name as localhost:5000. It returns the created app.
    • The client fixture uses the app fixture to create a test client for making requests to the Flask app. It yields the client, allowing the test code to use it.
    • By using fixtures, we can ensure that each test has access to a clean and configured Flask app and a client to interact with it.

      Writing a test:

    • The test_hello function is a unit test that verifies the behavior of the “/hello route” of the Flask app.
    • Inside the test function, the client fixture is used to make a GET request to the “/hello” route using “url_for('hello')”, which generates the URL for the hello endpoint.
    • Assertions are used to check the response received from the server. It asserts that the response status code is 200 and the response data is “b'Hello, World!'”.

      Overall, this code demonstrates how to set up fixtures to create a Flask app and client for testing and includes a simple test case that asserts the expected behavior of the “/hello” route. You could expand upon this example to write additional tests for other endpoints or application functionality.
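The fixture lifecycle described above (setup, yield a value to the test, then teardown) can be sketched with a plain generator. This is a conceptual illustration only, with a made-up `client_fixture` function; in real code the function carries the `@pytest.fixture` decorator and pytest drives the generator for you:

```python
# Conceptual sketch of the pytest fixture lifecycle (setup / yield / teardown).
# Hypothetical illustration; the lab's real fixtures build a Flask app
# and test client instead of a plain dict.
def client_fixture():
    client = {"base_url": "http://localhost:5000"}   # setup
    yield client                                     # value handed to the test
    client.clear()                                   # teardown after the test

# Roughly what pytest does behind the scenes:
gen = client_fixture()
client = next(gen)                                   # run setup, get the value
assert client["base_url"].startswith("http")         # the "test body"
try:
    next(gen)                                        # resume to run teardown
except StopIteration:
    pass
```

The `yield` statement is what separates the setup code from the teardown code; everything after it runs once the test finishes.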

  6. Close the “test_hello.py” file using VSCode menu option: File > Exit.

  7. Change to the my_greeting_app folder.

    cd my_greeting_app
  8. Run the test by issuing the “pytest” command as follows:

    pytest

    The result will be a failed test as seen below:

    Failed test

    If we look closely at the AssertionError, we can see that the test failed because the app code outputs “Hello, Planet Earth!” but the test is looking for “Hello, World!”. We will now fix the test.

  9. Open “test_hello.py” using the VSCode command-line:

    code /home/wasadmin/Works/lab4/MyProject/tests/test_hello.py
  10. Change the Unit Test code to test for “Hello, Planet Earth!” instead of “Hello, World!”. To do this, comment out the existing line by adding a “#” at the beginning of the line, and add a new line “assert res.data == b'Hello, Planet Earth!'” as shown below:

    Fixed text
  11. Save the file using File > Save, and Exit VSCode using File > Exit

  12. Re-run the test using the “pytest” command in the my_greeting_app folder:

    pytest

    The result will be a successful test run, as shown below:

    How did the “pytest” command know where to find the test?

    Pytest follows certain conventions to discover and run tests. In this lab, the test file is named “test_hello.py”. The naming convention test_*.py helps “pytest” automatically recognize the file as a test module.

    When you run “pytest” from the command line in the directory containing your tests, it automatically scans for files matching the test file naming pattern and collects all the test functions within those files.

    Here’s a breakdown of how Pytest discovers the tests:

    File Naming Convention:

    • Test files should be named starting with test_ or ending with _test.py.
    • By adhering to this convention, “pytest” recognizes the file as a test module and attempts to collect tests from it.

      Test Functions:

    • “pytest” looks for test functions within the test files.
    • Test functions are identified by their names starting with test_.
    • In the code above, the test_hello function is a test function that “pytest” discovers and executes.

      Test Discovery:

    • When you run “pytest” without specifying a specific file or directory, it automatically discovers and runs all the tests it can find in the current directory and its subdirectories.
    • “pytest” recursively searches for files matching the test file naming convention and collects all the test functions within those files.

      Important

      By following these conventions, “pytest” knows where to find the tests and executes them accordingly. If you have multiple test files or a directory structure containing tests, “pytest” will automatically discover and run all the relevant tests during execution.
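The file-discovery rule can be pictured as a recursive filename match. The sketch below is a simplified toy model of that idea (the function name `discover_test_files` is made up; real pytest discovery also honors configuration files and class/method conventions):

```python
# Simplified sketch of pytest's file-discovery rule: recursively collect
# files matching the test_*.py naming convention.
# (Illustration only; real pytest discovery is more involved.)
from pathlib import Path

def discover_test_files(root="."):
    """Return the names of files under root that match test_*.py."""
    return sorted(p.name for p in Path(root).rglob("test_*.py"))
```

Given a directory containing “hello.py” and “test_hello.py”, only the latter would be collected.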

4.8. Adding reporting

During the process of running unit tests, it is frequently necessary to generate reports that can be used in build pipeline stages to provide an audit trail of whether the unit tests have passed before proceeding to the remaining stages. To achieve this, we can make use of the “pytest-html” plugin, which enables the generation of HTML reports for our tests.

We will proceed by installing the “pytest-html” plugin and generating a report manually.

  1. Install the “pytest-html” Plugin, using the pip command as follows:

    pip install pytest-html

    Note

    In this example, we have manually issued the pip command to install the “pytest-html” dependency. Alternatively, we could have installed the plugin via a requirements.txt file, as we did with the “pytest” framework earlier in the lab.
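Had we taken the requirements.txt route instead, the file would simply gain one more line (the version is left unpinned here as an illustration; you may wish to pin it in practice):

```
Flask
pytest==7.3.2
pytest-html
```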

  2. Generate the HTML report by running the following command:

    pytest --html=report.html

    The result will be a generated report in a file called “report.html”, as seen below:

    Once the command is executed successfully, you can navigate to the file system using a file explorer, locate the “report.html” file, and open it in a web browser to view the generated HTML report with the detailed test results.

  3. Open the File-Explorer app using the Desktop Launcher as seen below:

    Use File Explorer
  4. Navigate to the “Home/Works/lab4/MyProject” folder as seen below:

    Open the folder

    Important

    In Ubuntu Desktop, there can seem to be a discrepancy between the file paths shown in the File Explorer desktop app and the actual file system paths. This is due to the way the $HOME environment variable is set.
    In the File Explorer desktop app, the path might be displayed as "Home/Works/lab4/MyProject", but the actual file system path is "/home/wasadmin/Works/lab4/MyProject".
    The reason for this difference is that the $HOME environment variable sets the user’s home folder, which in this case is "/home/wasadmin".
    So, while the File Explorer app displays the path as "Home/Works/lab4/MyProject", it is actually referring to the equivalent file system path "/home/wasadmin/Works/lab4/MyProject".
    It’s important to be aware of this discrepancy to ensure you are navigating and accessing the correct file locations when working with files and directories in Ubuntu Desktop.

  5. To view the generated Unit Test report, open the “report.html” file using the right-mouse click option, and select “Open With Google Chrome” as shown in the image below:

    “Open With Google Chrome”

    The result will be as follows:

    Report.html results

    In conclusion, generating HTML reports for unit tests using the "pytest-html" plugin is an essential step in the development process. By installing the plugin and executing the appropriate command, you can obtain a detailed report that provides insights into the test results.

    The lab is complete.

4.9. Clean Up

  1. Close all open Terminal sessions.

  2. Close all open Web Browser windows.

  3. Close all open VSCode files & windows.

4.10. Lab Summary

In this lab, you covered the following topics:

  • Created a directory structure for storing project files:
    • You set up a well-organized directory structure to manage your Python files effectively.
  • Created a Flask project:
    • You learned how to create a simple Flask application consisting of two files, "greeter.py" and "app.py," to get started with web development and gained hands-on experience in executing your code effectively.
  • Created a Python virtual environment:
    • You understood the importance of isolating project dependencies and learned how to create a virtual environment for your Python project.
  • Ran the Flask app:
    • You ran the Flask application and ensured that it functioned as expected, confirming the successful setup of your web application.
  • Managed dependencies:
    • You explored the use of a “requirements.txt” file to declare and manage your project’s package dependencies, and learned how to install them using Pip.
  • Ran unit tests with “pytest”:
    • You wrote and executed unit tests using the “pytest” framework, a powerful tool for ensuring the correctness of your code and maintaining code quality throughout your project.
  • You utilized the "pytest-html" plugin to generate an HTML report that provided detailed information about your test results.
    • This report allowed you to analyze the outcome of the tests, facilitating the identification of any issues and aiding in the overall assessment of your codebase.

By completing this lab, you developed a solid understanding of Python project management techniques, dependency isolation, and effective testing practices.

GitHub Actions - Introduction

Module 5

Welcome to this lab, which introduces GitHub Actions.

GitHub Actions is a powerful tool that automates various stages of a CI/CD pipeline. In this lab, you will gain hands-on experience by setting up a workflow that automatically builds and tests a Python application whenever changes are pushed to your GitHub repository.

We will use a simple Python app to keep things straightforward and focus on the workflow itself.

5.1. Lab setup

In this part we will create a New Repository in your own GitHub account.

Before starting this lab, we need to make sure you have an active GitHub account and Personal Access Token (PAT) which are created in a previous lab. If you have not yet created a GitHub account or PAT, then do so before continuing.

  1. Log into your personal GitHub account which you are using for this lab.

    Note

    We recommend that for labs involving GitHub you use a personal GitHub account, so as not to conflict with the organization you work for.

  2. Navigate to the main "Dashboard” (https://github.com/dashboard) of your GitHub account. You do this by clicking on the Dashboard link in the main top navigation toolbar as seen below:

    or, by using the left-hand-side hamburger menu as seen below, and selecting “Home”.

    Open hamburger menu
  3. Either use the global quick toolbar "+" button, or from the left-hand-side menu choose the "New" button, or, if shown, the “Create a new repository” button, as seen in the image below.

    “Create a new repository”
  4. In the "Create a new repository" page (https://github.com/new) enter the following name for the Repository:

    github-actions-lab5
  5. Add a description for the repo, such as:

    Lab 5 GitHub Actions

    Important

    We named the repo “github-actions-lab5”. Please ensure that you name the repo exactly as asked to ensure the lab instructions work.

  6. Choose “Private” repo.

    Important

    GitHub typically defaults to “Public” repos, so make sure you choose “Private” otherwise this repo becomes public.

  7. Click "Create repository" button to create the new repo.

    You will then be redirected to the home page of the repo. The URL of the repo home page will follow the syntax outlined below:
    https://github.com/<your_github_username>/github-actions-lab5

5.2. Clone the repository locally

In this part we will clone the repository locally, allowing you to work on the required files in your local file-system on your lab machine.

As you make changes throughout the lab, you will commit these changes and push them to the Github origin.

Note

This part of the lab requires that you have created a Personal Access Token (PAT) to use as a password for accessing repos in your GitHub account. This was done in a previous lab. If you do not have an existing PAT, please create one now before continuing.

  1. Open a new Terminal session, and navigate to the “Works” directory within your user’s home directory using the following command:

    cd $HOME/Works

    You should now be in the “/home/wasadmin/Works” directory.

  2. Check you have a directory called “/home/wasadmin/Works” using the “pwd” command:

    pwd

    Result:

    /home/wasadmin/Works

    Note

    If it has not already been created, then create it using the command: “mkdir -p /home/wasadmin/Works”

  3. Clone the remote GitHub repo using the following command syntax, but replace “<your_github_username>” placeholder with your actual GitHub username:

    git clone https://github.com/<your_github_username>/github-actions-lab5.git

    You can see an example of the resulting output below:

    Cloning into 'github-actions-lab5'...
    warning: You appear to have cloned an empty repository.

    Tip

    It is also possible to copy the HTTPS link used to clone your repo, by using the copy-link icon located in the “<> Code” tab.

  4. Navigate into the repo root using the “cd” command as follows:

    cd github-actions-lab5
  5. Set the “global” git user.name and user.email fields for all repos using the following commands:

    git config --global user.name "Bob"
    git config --global user.email bob@example.com

    We will now copy existing template code into this repository, commit, and push the changes up to GitHub.

  6. Copy the required template files into the current directory by issuing the following command:

    cp -R /home/wasadmin/Student_Templates/lab5/* .
  7. Check that the files have been copied and are in the current folder by using the “tree” command:

    tree

    You should see the following files:

    .
    ├── MyProject
    │   ├── main.py
    │   └── my_greeting_app
    │       ├── __init__.py
    │       └── greeter.py
    ├── main.initial.yaml
    └── main.solution.yaml
    
    2 directories, 5 files
  8. Check the repo status using the following command:

    git status

    The result will be similar to the output in the following image:

    git status shown
  9. Stage all new files, and commit them to GitHub by using the following command sequence:

    git add .
    git commit -m "Add initial files"
    git push

    Note

    Your GitHub credentials should be cached from the initial GitHub lab; however, if prompted, type your GitHub username (not your email) and use a Personal Access Token (PAT) as the password.

    The result will look similar to the following output below:

    [main (root-commit) 3003bf2] Add initial files
     5 files changed, 75 insertions(+)
     create mode 100644 MyProject/main.py
     create mode 100644 MyProject/my_greeting_app/__init__.py
     create mode 100644 MyProject/my_greeting_app/greeter.py
     create mode 100644 main.initial.yaml
     create mode 100644 main.solution.yaml
    Enumerating objects: 9, done.
    Counting objects: 100% (9/9), done.
    Delta compression using up to 4 threads
    Compressing objects: 100% (8/8), done.
    Writing objects: 100% (9/9), 1.23 KiB | 1.23 MiB/s, done.
    Total 9 (delta 1), reused 0 (delta 0), pack-reused 0
    remote: Resolving deltas: 100% (1/1), done.
    To https://github.com/<your_github_username>/github-actions-lab5.git
     * [new branch]      main -> main

    We are now ready to configure the repo for a GitHub Actions Workflow.

5.3. Setting up Workflow and File Folders in GitHub

In this part, you will learn how to organize your GitHub repository using workflow and file folders. This will help you better manage your project files and streamline your development process.

  1. Using the same Terminal session we used above, ensure that you are in the “/home/wasadmin/Works/github-actions-lab5” directory (the repo root) using the following command:

    cd /home/wasadmin/Works/github-actions-lab5

    Note

    If you don’t have a terminal session open, then open a new one and navigate to the “/home/wasadmin/Works/github-actions-lab5” directory.

  2. Create a new folder named "workflows" within a folder called “.github” using the following command:

    mkdir -p .github/workflows

    REMINDER: When we use the “-p” switch with the mkdir command, it means the entire multi-folder path is created.

  3. Use the “tree” command to verify the correct structure exists:

    tree .github

    Result:

    .github
    └── workflows
    
    1 directory, 0 files

    We will now create a GitHub Actions workflow file, and view it in the console.

  4. Copy the file called “main.initial.yaml” to create a new file called “main.yaml” into the new workflows folder you created. Use the following command:

    cp main.initial.yaml .github/workflows/main.yaml
  5. Display the contents of the file using the “cat” command:

    cat .github/workflows/main.yaml

    The result will be as follows:

    name: Python App CI  # Name of the workflow
    
    on:
      push:
        branches: [main]  # Trigger the workflow on push events to the main branch
      pull_request:
        branches: [main]  # Trigger the workflow on pull request events targeting the main branch
    
    jobs:
      build:
        runs-on: ubuntu-latest  # Specify the operating system for the job to run on
    
        steps:
        - name: Checkout repository  # Checkout the repository code
          uses: actions/checkout@v2  # Use the actions/checkout action
    
        - name: Set up Python  # Set up the Python environment
          uses: actions/setup-python@v2  # Use the actions/setup-python action
          with:
            python-version: 3.x  # Specify the Python version as 3.x
    
        - name: Run the app  # Execute the simple python application (app.py)
          working-directory: MyProject
          env:
            PORT: 5005 #Example of how to set the port to 5005 via an environment variable
          # Executed the command "python main.py" (example of single command in runner shell)
          run: |
            python main.py
  6. Open the YAML file (main.yaml) in Visual Studio Code (VSCode) using the VSCode command-line tool:

    code .github/workflows/main.yaml

    The result is “main.yaml” will be displayed in the VSCode editor as seen below:

    “main.yaml file in VSCode”

    Have a read through the file (main.yaml) noting that the file is in YAML format, which is recognized by VSCode. Currently, the YAML default of 2 spaces for indentation is being used.

    However, in Python, the default convention is 4 spaces, while YAML maintains the default convention of 2 spaces.

    It’s important to understand that you have the flexibility to choose any spacing you prefer in both Python and YAML, but it is crucial to maintain consistency throughout the entire file.

    Tip

    Using the default conventions is recommended as it aligns with industry standards, the majority of Python and YAML files adhere to the default conventions.

Demystifying the workflow file (main.yaml)

This simple workflow (main.yaml) is activated on push and pull requests to the main branch, executing the following tasks:

  • Cloning the repository
  • Setting up the Python environment
  • Running the app via a single command in an inline script

The workflow is initiated whenever changes are pushed to the "main" branch.

It consists of a single job named "build" that operates in an Ubuntu environment.

This job includes multiple steps, each performing specific tasks, executed in the order defined.

Let’s break down the structure and key elements of the main.yaml file:

name: Python App CI

The name field specifies the name of the workflow. In this case, it is set to "Python App CI", which provides a descriptive name for the workflow.

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

The on section defines the events that trigger the workflow. In this case, the workflow will be triggered on both push events and pull request events targeting the main branch. This means that whenever code is pushed to the main branch or a pull request is created or updated for the main branch, this workflow will run.
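The trigger rules can be pictured as a lookup from event type to the branches that are allowed to fire the workflow. The sketch below is a toy model of that matching logic (a hypothetical simplification for intuition only, not GitHub's actual implementation):

```python
# Toy model of the "on:" trigger matching described above.
# (Hypothetical simplification; GitHub's real event matching is far richer.)
TRIGGERS = {
    "push": ["main"],
    "pull_request": ["main"],
}

def should_run(event, branch, triggers=None):
    """Return True if the workflow would fire for this event/branch pair."""
    triggers = TRIGGERS if triggers is None else triggers
    return branch in triggers.get(event, [])
```

Under this model, a push to “main” fires the workflow, while a push to a feature branch does not.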

jobs:
  build:
    runs-on: ubuntu-latest

The jobs section defines one or more jobs that make up the workflow. In this case, there is a single job named build. The runs-on field specifies the type of runner that will execute the job. In this example, it is set to “ubuntu-latest”, indicating that the job will run on an Ubuntu environment.

steps:
- name: Checkout repository
  uses: actions/checkout@v2

The steps section contains a list of individual steps that make up the job. Each step represents a specific action to be performed. In this example, the first step is named "Checkout repository" and it uses the actions/checkout action, which retrieves the repository code for the workflow to work with.

- name: Set up Python
  uses: actions/setup-python@v2
  with:
    python-version: 3.x

This step is named "Set up Python" and uses the actions/setup-python action. It sets up the Python environment for the subsequent steps to run. The with block is used to provide additional configuration parameters to the action. In this case, it specifies the desired Python version as 3.x.

- name: Run the app  # Execute the simple python application (app.py)
  working-directory: MyProject
  env:
    PORT: 5005 #Example of how to set the port to 5005 via an environment variable
  # Executed the command "python main.py" (example of single command in runner shell)
  run: |
    python main.py

The step is named "Run the app" and sets a working directory before executing a command using the “run” field.

The “working-directory” directive is used to specify the directory where the subsequent commands are to be executed. Here, the “working-directory” is set to “MyProject”, which means the “run” command will be executed inside the “MyProject” directory.

The command that is to be run is “python main.py”. This command is provided in the “run” field. It executes the Python script named “main.py”, effectively running your Python application.

An “env” field is also present, defining an environment variable “PORT” set to “5005”. This environment variable is used to override the default port (5000) used by your Python application, defining the port it should listen on.
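On the application side, picking up that environment variable typically looks like the following sketch (a hypothetical `get_port` helper; the lab's actual main.py may differ):

```python
# Hypothetical sketch of how main.py could honor the PORT variable set by
# the workflow's "env:" block. (The lab's actual main.py may differ.)
import os

def get_port(default=5000):
    """Read PORT from the environment, falling back to the default 5000."""
    return int(os.environ.get("PORT", default))
```

With PORT=5005 exported (as in the workflow), `get_port()` returns 5005; without it, the default 5000 is used.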

Note

In a GitHub Actions Workflow, the working-directory field must be specified for each step individually if you want to change the working directory context. It does not carry over automatically to subsequent steps.

We are now ready to commit the new changes we have made, aka the GitHub Actions Workflow as specified in “main.yaml”

5.4. Commit the workflow changes

In this part, we will commit our new workflow, and push to GitHub, thus triggering the Workflow.

  1. Close VSCode using File > Exit being sure to discard any changes you may have made to “main.yaml” while you were reading it.

  2. In your Existing Terminal session, issue the following git commands to commit, and push the new workflow to GitHub:

    git status
    git add .
    git commit -m "Add workflow file"
    git push

    Tip

    Use the “git status” command to check the status of the repo before you issue git commands.

    We will now move on to check the status of the Workflow in the GitHub Actions interface.

5.5. Monitor the Workflow Execution

You can observe the execution status of all workflows in your repository at any given time, including monitoring the execution of individual steps within an active runner. When a workflow is being executed in a runner, the workflow output displays the progress of each step, including any possible errors or warnings. If the workflow is executed successfully, it concludes with a green tick next to the corresponding run.

  1. To monitor your workflow execution, navigate to the “Actions” tab in your GitHub repository, as shown in the image below:

    “Actions” tab

    Note

    The URL will be in the following format:
    https://github.com/<your_github_username>/github-actions-lab5/actions

    As we can see in the image above, all the runs from each workflow in your repository will be displayed here.

    If you only see “red” run entries, it means there might be a mistake in your YAML file. Please ensure that it matches the example provided.

    If the YAML syntax is correct, a “green” checkmark will appear next to the most recent event (or workflow run event) entry, which indicates a successful execution of the workflow.

  2. Wait for the job titled “Add workflow file” to complete successfully.

    We can now move on, and update the workflow (main.yaml) to trigger another run.

5.6. Trigger a build using a commit

Let’s change the workflow to add a new step that prints a message in the output when the workflow is run.

  1. In a New or the Existing terminal session (if you have not closed the previous terminal session) check you are in the “/home/wasadmin/Works/github-actions-lab5” directory:

    cd /home/wasadmin/Works/github-actions-lab5
  2. Open the “main.yaml” using the VSCode command-line shortcut:

    code .github/workflows/main.yaml
  3. Modify the existing YAML code to append a newline after the line that contains “python main.py” (around line 28), then append a new step (around line 30), using the following YAML.

    - name: Status
      run: echo "Job Completed"

    Make sure the indentation is correct as shown in the image below:

    YAML Style indentation
  4. Save the changes using VSCode menu: File > Save, and exit VSCode using File > Exit

  5. Commit, and push the changes to GitHub

    git status
    git add .
    git commit -m "Updated workflow file"
    git push
  6. Check out the updated workflow in GitHub Actions as we did before, and see that there will be a new run logged.

    updated workflow in GitHub Actions

    As seen in the image above, there is a new workflow-run logged titled with the commit message of the last commit.

  7. Wait for the workflow to complete.

  8. Click on the Workflow run titled “Updated workflow file”, or the latest one if you have made other commits using a different message:

    You will then be taken to the details page of the run you have just clicked on.

  9. Locate the “main.yaml” “on:push” event, and click on the “build” job as seen below:

    “build job
  10. The build output will be displayed; the result will be similar to the following image:

    The build output
  11. Expand the “build” job’s step labeled “Status” to see the output message we added to the Workflow:

    The result will be as follows:

    The updated Workflow shown

Kudos! You’ve successfully created a simple GitHub Actions workflow.

Note

This workflow could be further tailored and enhanced to meet the specific requirements of the application’s build/deploy process. We will look at further examples in the following labs.

5.7. Clean up

  1. On your lab machine, close all instances of VSCode Windows, any Browser Windows, and any remaining open Terminal sessions, ready for the next lab.

Once you have completed the lab, feel free to have a look around GitHub, and become more familiar with the GitHub Actions interface.

5.8. Summary

In this lab, we explored the fundamentals of GitHub Actions and how they can be utilized to automate various stages of a CI/CD pipeline for Python projects.

The lab focused on setting up a workflow that automatically builds and tests a Python application whenever changes are pushed to a GitHub repository.

By completing this lab, you achieved the following learning outcomes:

  • Implemented a basic GitHub Actions workflow for Python projects
  • Gained familiarity with the structure and syntax of workflow YAML files
  • Developed skills in managing and monitoring workflows

Remember to continue exploring GitHub Actions and its advanced features to enhance your CI/CD processes.

GitHub Actions - Simple Pipeline

Module 6

Welcome to this lab, which covers how to create a multi-job pipeline in GitHub Actions.

GitHub Actions is a powerful tool that automates various stages of a CI/CD pipeline. In this lab, you will gain hands-on experience by setting up a workflow that specifies a multi-job pipeline.

A pipeline is simply a set of workflow stages (jobs) which automatically run as part of the pipeline whenever changes are pushed to your GitHub repository.

We will use the pipeline to build and test a simple Python Flask app.

6.1. Lab setup

In this part we will create a New Repository in your own GitHub account.

Before starting this lab, we need to make sure you have an active GitHub account and Personal Access Token (PAT) which are created in a previous lab. If you have not yet created a GitHub account or PAT, then do so before continuing.

  1. Log into your personal GitHub account which you are using for this lab.

    Note

    We recommend that for labs involving GitHub you use a personal GitHub account, so as not to conflict with the organization you work for.

  2. Navigate to the main "Dashboard” (https://github.com/dashboard) of your GitHub account. You do this by clicking on the Dashboard link in the main top navigation toolbar as seen below:

    [Image: Dashboard link in the main top navigation toolbar]

    or, by using the left-hand-side hamburger menu as seen below, and selecting “Home”.

    [Image: left-hand-side hamburger menu with the "Home" option]
  3. Either use the global quick toolbar "+" button, or from the left-hand-side menu choose the "New" button, or, if shown, the "Create a new repository" button, as seen in the image below.

    [Image: "New" / "Create a new repository" buttons]
  4. In the "Create a new repository" page (https://github.com/new) type “github-actions-lab6” for the Repository name:

    github-actions-lab6
  5. Optionally add a description such as

    Lab 6 GitHub Actions

    Important

    We named the repo “github-actions-lab6”. Please ensure that you name the repo exactly as asked to ensure the lab instructions work.

  6. Choose “Private” repo.

    Important

    GitHub typically defaults to “Public” repos, so make sure you choose “Private” otherwise this repo becomes public.

  7. Click "Create repository" button to create the new repo.

    You will then be redirected to the home page of the repo. The URL of the repo home page will follow the syntax outlined below:
    https://github.com/<your_github_username>/github-actions-lab6

6.2. Clone the repository locally

In this part we will clone the repository locally, allowing you to work on the required files in your local file-system on your lab machine.

As you make changes throughout the lab, you will commit these changes and push them to the GitHub origin.

Note

This part of the lab requires that you have created a Personal Access Token (PAT) to access repos in your GitHub account. If not, do so now.

  1. Open a new Terminal session, and change to the "Works" directory under your user’s home directory using the following command:

    cd $HOME/Works

    You should now be in the “/home/wasadmin/Works” directory.

  2. Confirm that your current directory is "/home/wasadmin/Works" using the "pwd" command:

    pwd

    Result:

    /home/wasadmin/Works

    Note

    If it has not already been created, then create it using the command: “mkdir -p /home/wasadmin/Works”

  3. Clone the remote GitHub repo using the following command syntax, but replace “<your_github_username>” placeholder with your actual GitHub username:

    git clone https://github.com/<your_github_username>/github-actions-lab6.git

    Tip

    It is also possible to copy the HTTPS link used to clone your repo, by using the copy-link icon located in the "<> Code" tab.

    You can see an example of the resulting output below:

    Cloning into 'github-actions-lab6'...
    warning: You appear to have cloned an empty repository.
  4. Navigate into the repo root using the “cd” command as follows:

    cd github-actions-lab6
  5. Verify that the global git user.name and user.email fields are set for all repos using the following command:

    git config --global --list

    The Result will be similar to the following output:

    credential.helper=store
    user.name=Bob
    user.email=bob@example.com

    If the user.name, and user.email fields are not set, then issue the following commands:

    git config --global user.name "Bob"
    git config --global user.email bob@example.com

    We will now copy existing template code into this repository, commit, and push the changes up to GitHub.

  6. Copy the required template files into the current directory by issuing the following command:

    cp -R /home/wasadmin/Student_Templates/lab6/* .
  7. Check that the files have been copied and are in the current folder by using the “tree” command as shown below:

    tree

    You should see the following files:

    .
    ├── MyProject
    │   ├── my_app
    │   │   ├── __init__.py
    │   │   ├── app.py
    │   │   └── message.py
    │   ├── requirements.txt
    │   └── tests
    │       ├── __init__.py
    │       └── test_app.py
    └── pipeline.initial.yaml
    
    3 directories, 7 files
  8. Check the repo status using the following command:

    git status

    Result:

    On branch main
    
    No commits yet
    
    Untracked files:
      (use "git add <file>..." to include in what will be committed)
    	MyProject/
    	pipeline.initial.yaml
    nothing added to commit but untracked files present (use "git add" to track)
  9. Stage all new files, and commit them to GitHub by using the following command sequence:

    git add .
    git commit -m "Add initial files"
    git push

    Note

    Your GitHub credentials should be cached from the initial GitHub lab; however, if prompted, type your GitHub username (not your email) and use a Personal Access Token (PAT) as the password.

    The result will look similar to the following output below:

git add .
git commit -m "Add initial files"
[main (root-commit) 400cc2b] Add initial files
 7 files changed, 77 insertions(+)
 create mode 100644 MyProject/my_app/__init__.py
 create mode 100644 MyProject/my_app/app.py
 create mode 100644 MyProject/my_app/message.py
 create mode 100644 MyProject/requirements.txt
 create mode 100644 MyProject/tests/__init__.py
 create mode 100644 MyProject/tests/test_app.py
 create mode 100644 pipeline.initial.yaml

git push
Enumerating objects: 11, done.
Counting objects: 100% (11/11), done.
Delta compression using up to 4 threads
Compressing objects: 100% (10/10), done.
Writing objects: 100% (11/11), 1.56 KiB | 1.56 MiB/s, done.
Total 11 (delta 0), reused 0 (delta 0), pack-reused 0
To https://github.com/<your_github_username>/github-actions-lab6.git
 * [new branch]      main -> main

Tip

Using the “git status” command, you can always check the status of the repo and see staged, and un-staged changes.

We are now ready to configure the repo for a GitHub Actions Workflow.

6.3. Setting up Workflow and File Folders in GitHub

In this part, you will learn how to organize your GitHub repository using workflow and file folders. This will help you better manage your project files and streamline your development process.

  1. Using the Existing Terminal session we used earlier, ensure that you are in the “/home/wasadmin/Works/github-actions-lab6” directory (the repo root) using the following command:

    cd /home/wasadmin/Works/github-actions-lab6

    Note

    If you don’t have a Terminal session open, then open a new one and navigate to the “/home/wasadmin/Works/github-actions-lab6” directory.

  2. Create a new folder named "workflows" within a folder called “.github” using the following command:

    mkdir -p .github/workflows

    REMINDER: The “-p” switch tells the mkdir command to create the entire multi-folder path, including any missing parent folders.

  3. Use the “tree” command to verify the correct structure exists:

    tree .github

    Result:

    .github
    └── workflows
    
    1 directory, 0 files

    We will now create a GitHub Actions workflow file named “pipeline.yaml”, and view it in the console.

  4. Copy the template file “pipeline.initial.yaml” into the new workflows folder you created, naming the copy “pipeline.yaml”. Use the following command:

    cp pipeline.initial.yaml .github/workflows/pipeline.yaml
  5. Display the contents of the file using the cat command:

    cat .github/workflows/pipeline.yaml

    The result will be as follows:

    name: Python CI Pipeline
    
    on:
      push:
        branches:
          - main
      pull_request:
        branches:
          - main
    
    jobs:
      build:
        name: Build
        runs-on: ubuntu-latest
    
        steps:
          - name: Checkout code
            uses: actions/checkout@v2
    
          - name: Set up Python
            uses: actions/setup-python@v2
            with:
              python-version: 3.x
    
          - name: Install dependencies
            working-directory: MyProject
            run: |
              python -m pip install --upgrade pip
              pip install -r requirements.txt
  6. Now open the YAML file (pipeline.yaml) in Visual Studio Code (VSCode) using the VSCode command-line tool:

    code .github/workflows/pipeline.yaml

    The result is “pipeline.yaml” will be displayed in the VSCode editor as seen below:

    [Image: pipeline.yaml open in the VSCode editor]

    Have a read through the file (pipeline.yaml), noting that it is in YAML format, which VSCode recognizes. The file currently uses the YAML default of 2 spaces for indentation.

    Note that the two languages have different default conventions: Python uses 4 spaces for indentation, while YAML uses 2.

    It’s important to understand that you have the flexibility to choose any spacing you prefer in both Python and YAML, but it is crucial to maintain consistency throughout the entire file.

    Tip

    Using the default conventions is recommended as it aligns with industry standards; the majority of Python and YAML files adhere to them.

    Demystifying the pipeline file (pipeline.yaml)

    This simple pipeline (pipeline.yaml) is activated on push and pull requests to the main branch, executing the following tasks:

    • Cloning the repository.
    • Running a multi-line inline script.

      The workflow is initiated whenever changes are pushed to the "main" branch.

      It consists of a single job named "build" that operates in an Ubuntu environment.

      This job includes multiple steps, each performing specific tasks, executed in the order defined.

      Let’s break down the structure and key elements of the pipeline.yaml file:

      name: Python CI Pipeline

      The name field specifies the name of the workflow. In this case, it is set to "Python CI Pipeline", which provides a descriptive name for the workflow.

      on:
        push:
          branches:
            - main
        pull_request:
          branches:
            - main

      The on section defines the events that trigger the workflow. In this case, the workflow will be triggered on both push events and pull request events targeting the main branch. This means that whenever code is pushed to the main branch or a pull request is created or updated for the main branch, this workflow will run.

      jobs:
        build:
          runs-on: ubuntu-latest

      The jobs section defines one or more jobs that make up the workflow. In this case, there is a single job named build. The runs-on field specifies the type of runner that will execute the job. In this example, it is set to ubuntu-latest, indicating that the job will run on an Ubuntu environment.

      steps:
        - name: Checkout code
          uses: actions/checkout@v2

      The steps section contains a list of individual steps that make up the job. Each step represents a specific action to be performed. In this example, the first step is named "Checkout code" and it uses the actions/checkout action, which retrieves the repository code for the workflow to work with.

      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.x

      This step is named "Set up Python" and uses the actions/setup-python action. It sets up the Python environment for the subsequent steps to run. The with block is used to provide additional configuration parameters to the action. In this case, it specifies the desired Python version as 3.x.

      The "Install dependencies" step below is responsible for installing the project’s dependencies.

      - name: Install dependencies
        working-directory: MyProject
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt

      Here is a more detailed explanation of what each line does in the Install dependencies step:

    • name: Install dependencies: This line simply provides a name or label for the step to identify it in the pipeline.
    • working-directory: MyProject: This line specifies the working directory for the subsequent commands. It sets the context for where the following commands will be executed. In this case, it sets the working directory to "MyProject", which means the commands will be executed within that directory.
    • run: |: This line indicates that a multi-line command block will follow. The | symbol allows for writing multiple lines of commands as part of the YAML configuration.
    • python -m pip install --upgrade pip: This command upgrades the pip package manager itself to the latest version. It uses the python -m pip syntax to invoke the pip module as a Python script, ensuring that the correct pip version associated with the selected Python environment is used.
    • pip install -r requirements.txt: This command installs the project’s dependencies by reading them from the "requirements.txt" file. The pip install -r command is used to install multiple packages specified in a requirements file. This line assumes that a file named "requirements.txt" exists in the "MyProject" directory, and it installs all the packages listed.

      Note

      In a GitHub Actions Workflow, the working-directory field must be specified for each step individually if you want to change the working directory context. It does not carry over automatically to subsequent steps.

      We are now ready to commit the new files we have made, i.e. the GitHub Actions Workflow as specified in “pipeline.yaml”.

6.4. Commit the workflow changes

In this part, we will commit our new workflow, and push to GitHub, thus triggering the Workflow.

  1. Close VSCode, using File > Exit, being sure to discard any changes you may have made to “pipeline.yaml” while you were reading it.

  2. Issue the following git commands to commit, and push the new workflow to GitHub:

    git status
    git add .
    git commit -m "Add pipeline file"
    git push

    Tip

    Use the git status command, to check the status of the repo before you issue git commands.

    We will now move on to check the status of the Workflow in the GitHub Actions interface.

6.5. Monitor the Workflow Execution

You can observe the execution status of all workflows in your repository at any given time, including monitoring the execution of individual steps within an active runner. When a workflow is being executed in a runner, the workflow output displays the progress of each step, including any possible errors or warnings. If the workflow is executed successfully, it concludes with a green tick next to the corresponding run.

  1. To monitor your workflow execution, navigate to the “Actions” tab in your GitHub repository, as shown in the image below:

    [Image: "Actions" tab in the GitHub repository]

    Note

    The URL will be in the following format:
    https://github.com/<your_github_username>/github-actions-lab6/actions

    If you catch the update in time, you might see the following image, which shows the workflow in progress:

    [Image: workflow run in progress]

    If you only see “red” run entries, it means there might be a mistake in your YAML file. Please ensure that it matches the example provided.

    If the YAML syntax is correct, a “green” checkmark will appear next to the most recent event (or workflow run event) entry, which indicates a successful execution of the workflow.

    We can now move on, and update the pipeline workflow (pipeline.yaml) to trigger another run.

  2. Before continuing, ensure that there has been a successful run. The workflow run will be titled “Add pipeline file”, and it will have a “green” checkmark indicating success.

6.6. Add a test to the pipeline

Let’s change the pipeline workflow to add a new step to the job, which will run an existing unit test.

Before we change “pipeline.yaml”, let’s first update “requirements.txt” to add the “pytest==7.3.2” dependency, so we can use “pytest” to run the unit test as part of the updated pipeline.
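
The shape of a pytest unit test is worth a quick look before we wire it into the pipeline. The sketch below is a hypothetical, self-contained example of the kind of test pytest discovers; the lab’s actual tests are supplied in the template file “MyProject/tests/test_app.py”, and the only assumption here is that Flask is installed.

```python
# Hypothetical sketch of a pytest-style unit test for a Flask route.
# The lab's real tests live in MyProject/tests/test_app.py; this
# stand-alone example only assumes Flask is installed.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/hello-message')
def hello_message():
    return jsonify({'message': 'Hello!'})

def test_hello_message():
    # Flask's built-in test client calls the route without starting a server.
    client = app.test_client()
    response = client.get('/hello-message')
    assert response.status_code == 200
    assert response.get_json() == {'message': 'Hello!'}
```

pytest automatically discovers functions whose names start with “test_” inside files named “test_*.py”, which is why the pipeline step can later invoke the bare “pytest” command with no arguments.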

  1. Using the Existing Terminal session, issue the following command to open the “requirements.txt” file into VSCode:

    code MyProject/requirements.txt

    The open “requirements.txt” file will contain only the “Flask” dependency.

  2. Add the new dependency entry to the file by appending the following text on a newline, after the “Flask” entry:

    pytest==7.3.2

    The updated “requirements.txt” file should now contain both the “Flask”, and “pytest==7.3.2” dependencies.

  3. Save the requirements.txt file using File > Save, and Exit VSCode using File > Exit

  4. In a New or Existing Terminal session (if you have not closed the previous Terminal session) ensure you are in the “/home/wasadmin/Works/github-actions-lab6” directory, by running the following command:

    cd /home/wasadmin/Works/github-actions-lab6
  5. Open the “pipeline.yaml” using the VSCode command-line shortcut:

    code .github/workflows/pipeline.yaml
  6. Now modify the existing YAML code: add a blank line, then append a new step after it (this should be around line 31), using the following YAML.

    - name: Run unit tests using pytest
      working-directory: MyProject
      id: unit-tests
      run: pytest
  7. Double check your edits to make sure the indentation is correct as shown in the image below:

    [Image: updated pipeline.yaml with the new test step]
  8. Save the changes using File > Save, and exit VSCode using File > Exit

  9. In your Existing Terminal session, commit, and push the changes to GitHub:

    git status
    git add .
    git commit -m "Updated pipeline to include unit test"
    git push
  10. Check out the updated workflow in the GitHub Actions interface as we did before, and see that there will be a newly logged run.

    The result will be similar to the following image below:

    [Image: GitHub Actions run list showing the new workflow run]

    As seen in the image above, there is a new workflow run logged titled with the commit message of the last commit.

  11. Before continuing, ensure that there has been a successful run.

  12. Click on the Workflow run titled “Updated pipeline to include unit test”, or the latest one if you have made other commits:

    [Image: workflow run row "Updated pipeline to include unit test"]

    You will then be taken to the details page of the run you have just clicked on.

    [Image: workflow run details page]
  13. Locate the “pipeline.yaml” “on:push” event, and click on the “build” job as seen above.

    The build (job) output will be displayed, and the result will be similar to the following image below:

    [Image: build job output]
  14. Expand the “Build” job’s step labeled “Run unit tests using pytest” to see the result of the test step which we added to the pipeline:

    The result will be as follows:

    [Image: expanded "Run unit tests using pytest" step output]

6.7. Adding a second job to the pipeline

In this part we will be adding a new job to the pipeline. The purpose of this job is to deploy the app, after the build job is complete.

It won’t actually deploy the app; it is just a placeholder to demonstrate what a multi-job GitHub Actions workflow YAML file looks like, so you can experience a multi-job pipeline in action.

  1. Open a New or Existing Terminal session.

  2. Confirm you are in the “/home/wasadmin/Works/github-actions-lab6” directory

    cd /home/wasadmin/Works/github-actions-lab6
  3. Open “pipeline.yaml” in VSCode using the command-line shortcut

    code .github/workflows/pipeline.yaml
  4. Append a new entry to the “Run unit tests using pytest” step using the following YAML snippet:

    continue-on-error: true

    The updated job, will be as follows:

    [Image: updated test step with continue-on-error: true]
  5. Next, continue to edit “pipeline.yaml”, appending a blank line to nicely separate the existing jobs (around line 36).

  6. Then (around line 37) append a new Job named “Deploy to Test Environment” by copying and pasting this YAML below:

    deploy:
      name: Deploy to Test Environment
      needs: build
      if: ${{ needs.build.result == 'success' }}
      runs-on: ubuntu-latest

      steps:
        - name: Checkout code
          uses: actions/checkout@v2

        - name: Set up Python
          uses: actions/setup-python@v2
          with:
            python-version: 3.x

        - name: Deploy to test environment
          run: |
            # Add your deployment steps here
            echo "Deploying to test environment..."

    The resulting updated “pipeline.yaml” will look exactly like the following image:

    [Image: updated pipeline.yaml with the deploy job]
  7. Validate that your “pipeline.yaml” file is exactly the same, and the indentations are aligned. Then Save the “pipeline.yaml” file using File > Save, and exit VSCode using File > Exit.

  8. In your Terminal session, Stage, Commit, and Push the changes to GitHub using the following commands:

    git status
    git add .
    git commit -m "Added deploy job"
    git push
  9. Go to the GitHub Actions tab (as we have done before) and review the status of the latest commit.

    If you get to the GitHub Actions page in time, you might see the queued job as seen in the example below:

    [Image: queued deploy job in the GitHub Actions page]

    Tip

    If you get a failed run, by default GitHub will email you a failed-run message with details of the run id (hash), using the email address assigned to your GitHub account.

  10. Wait for the job to complete.

  11. Click on the workflow run row labeled “Added deploy job” to see the jobs and their individual status as they complete, in accordance with the declared workflow.

    Drilling down into the pipeline, we can see that each job is separated out, as seen in the image below:

    [Image: pipeline view showing the build and deploy jobs]

    Each stage (job) of the pipeline works through the steps assigned to that job, as defined in the workflow YAML (in our lab, this is pipeline.yaml).

    Once the entire workflow (aka our pipeline) is complete, we will see that each stage (job) has its own green tick indicating that the job is complete.

  12. Click on the “Deploy to Test Environment” job, to drill down in to the details as indicated below:

    [Image: "Deploy to Test Environment" job details]
  13. Expand the step labeled “Deploy to test environment” to see the sample placeholder message that is indicating that the deployment is being made to the test environment.

    [Image: expanded "Deploy to test environment" step output]

    Obviously, this deployment stage (job) is just a placeholder demonstrating a fictitious deployment to an environment, run only if the build job completes successfully.

    Congratulations! You have successfully created a multi-job GitHub Actions pipeline to build, test, and deploy your Python project.

    Note

    This pipeline could be further tailored and enhanced to meet the specific requirements of the application’s build/deploy process. In a later lab, we will produce a complete end-to-end example.

6.8. Clean Up

  1. Close all open Terminal sessions.

  2. Close all open Web Browser windows.

  3. Close all open VSCode files & windows.

6.9. Summary

In this lab, we focused on setting up a workflow that defines a multi-job pipeline for a Python Flask app. The pipeline includes two main jobs: "Build" and "Deploy to Test Environment."

Here’s an overview of what we accomplished:

  • Lab Setup:
    • We created a new repository in a personal GitHub account to hold our project.
  • Cloned the Repository Locally:
    • We cloned the newly created repository to our local Lab VM to work on the files.
  • Configured the GitHub Actions Workflow:
    • We organized our repository by creating a ".github/workflows" folder with the pipeline workflow file "pipeline.yaml" inside it.
    • We configured the YAML file to define the steps for the "Build" job, including installing dependencies and running unit tests using pytest.
  • Monitored the Workflow Execution:
    • We used the GitHub Actions interface to observe the workflow execution and see the status of each step.
  • Added a Test to the Pipeline:
    • We updated the pipeline to include a new step that runs existing unit tests.
    • We also updated the "requirements.txt" file to include the "pytest" dependency.
  • Added a Second Job to the Pipeline:
    • We extended the pipeline by adding a new job named "Deploy to Test Environment."
    • Though it was a placeholder, it demonstrated the concept of multi-job pipelines.
By following these steps, we gained hands-on experience in setting up a simple multi-job pipeline with GitHub Actions, enabling automation and smoother development processes.

A Simple Flask Microservice

Module 7

This lab focuses on understanding path-based routing in microservices using Flask.

Path-based routing allows us to handle different endpoints and functionalities within a microservice by associating routes with corresponding functions.

By learning about routes and path-based routing, you’ll experience how APIs in microservices handle different requests based on paths (routes).

We’ll gradually build a Flask microservice with routes such as "/hello-message", "/goodbye-message", and a dynamic route "/custom-message/<name>".

By the end of this lab, you’ll have a practical understanding of creating a basic Flask microservice, defining routes, and using path-based routing to handle different functionalities.

7.1. Creating a new Project Directory

We will be using Visual Studio Code (VSCode) for code editing during this lab, including the built-in VSCode Integrated Terminal, to demonstrate different aspects of the tools available in VSCode.

  1. Open a new terminal session.

  2. Create a new directory for your project using the following command:

    mkdir -p /home/wasadmin/Works/lab7/MyProject
  3. Navigate to the newly created directory “/home/wasadmin/Works/lab7/MyProject”:

    cd /home/wasadmin/Works/lab7/MyProject
  4. Confirm you are in the “/home/wasadmin/Works/lab7/MyProject” directory using the Print Working Directory (pwd) command:

    pwd

    Result:

    /home/wasadmin/Works/lab7/MyProject
  5. Launch VSCode using the current directory for context, by using the VSCode command-line shortcut as follows:

    code .

    Note

    Do not forget the period “.” after the code command e.g. “code .”.

    If you are prompted with an “Authentication required” pop-up window, use “wasadmin” as the password.

  6. VSCode will launch, and open the folder and set the working context to be the “/home/wasadmin/Works/lab7/MyProject” as seen in the image below:

    [Image: VSCode opened in the MyProject folder]

    Note

    In the diagram above, you can use the restore window option if you wish to not have the VSCode window take up the entire Desktop.

    Tip

    If the welcome page still shows then uncheck the “Show Welcome page on startup” option located at the bottom of the default Welcome page, and then close the “Welcome page” using the small cross to the right of the Welcome page tab as seen in the image above.

  7. Create a new file called “app.py” in the project root using the VSCode menu File > New File, or click on the “New File” icon as shown in the two images below:

    [Image: File > New File menu option]
    [Image: "New File" icon in the Explorer pane]
  8. Copy and Paste the following Python code into “app.py”:

    from flask import Flask, jsonify
    
    app = Flask(__name__)
    
    @app.route('/hello-message')
    def hello_message():
        message = {'message': 'Hello!'}
        return jsonify(message)
    
    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=8080)

    This code creates a basic Flask application with a single route “/hello-message” that returns a JSON response with a greeting message.

    Note

    To save time and avoid distracting you with the intricacies of robust error handling in enterprise-grade applications, the code provided here focuses solely on illustrating the concepts of path-based routing.

  9. Verify that the code looks exactly the same as the image below, making sure that all the indentation is correct:

    [Image: app.py in the VSCode editor]

    Note

    In VSCode, the file extension (e.g., ".py") is automatically detected, and the default indentation will use spaces instead of tab characters. Specifically, the "tab" key will insert 4 spaces because the code is Python. The standard practice in modern editors is that spaces are preferred over tab characters.

  10. Save the new “app.py” using the VSCode main menu option File > Save, but do not close/exit the VSCode window, keep it open.
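
The final two lines of “app.py” are what start the development server: host='0.0.0.0' binds to all network interfaces, while port=8080 overrides Flask’s default of 5000. The illustrative sketch below (not part of the lab files, and assuming only that Flask is installed and port 8080 is free) shows the same app being served and queried programmatically:

```python
import json
import threading
import time
import urllib.request

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/hello-message')
def hello_message():
    return jsonify({'message': 'Hello!'})

# Run the development server in a background daemon thread so this
# script can query it; host='0.0.0.0' listens on every interface,
# and port=8080 overrides Flask's default of 5000.
threading.Thread(
    target=lambda: app.run(host='0.0.0.0', port=8080),
    daemon=True,
).start()

# Poll until the server answers (it normally starts within a second).
for _ in range(50):
    try:
        with urllib.request.urlopen('http://localhost:8080/hello-message') as resp:
            body = json.loads(resp.read())
        break
    except OSError:
        time.sleep(0.1)

print(body)  # -> {'message': 'Hello!'}
```

In the lab itself we start the server in the foreground with “python app.py” and query it from a browser, which is what the next part does.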

7.2. Test the initial App

In this part of the lab, we will launch the integrated terminal within VSCode and execute commands directly on the command line without having to switch to a separate terminal application. This feature is a productivity enhancement commonly utilized by developers.

As builders, gaining familiarity with the tools and workflows employed by developers can greatly enhance our understanding and collaboration in software development projects.

  1. Open a new VSCode Integrated Terminal session, by using the menu option Terminal > New Terminal, as shown in the image below:

    [Image: Terminal > New Terminal menu option]

    Tip

    Be aware, that when using the VSCode Integrated Terminal, the experience can change depending on the extensions installed.

    The Python extension has been installed already for VSCode, so your terminal session, might load a previous “pyenv” that has been configured from another lab.

    If this happens, then you can use the VSCode Preferences > Settings menu, locate the setting “Python > Terminal: Activate Environment”, and set it to false. The JSON for this setting is "python.terminal.activateEnvironment": false

    You can also search for this setting by typing “python.terminal.activateEnvironment” in the setting search bar.

    Productivity Tip: Once you have opened a new terminal in VSCode, you can maximize the VSCode window and adjust the windows within the interface as needed to maximize your available screen space. This will provide you with a larger working area and allow you to focus on your code and terminal output more efficiently.

  2. In the new VSCode terminal pane located underneath the code, type the tree command to confirm that “app.py” exists as follows:

    tree

    The result will be as shown in the image below:

    [Image: tree command output showing app.py]

    To run the Flask application, we will need to install the flask dependency.

  3. Run the Python Installer manually using the “pip” command as follows:

    pip install Flask

    The result is that the latest Flask dependency is now installed. The output message may vary depending on the time and version.

  4. To run the Flask application, issue the following command to the Python interpreter:

    python app.py

    The Flask Development Server (the version depending on the Flask release installed) will start the app, listening on port 8080. The output will look similar to the following screen:

    [Image: Flask development server output listening on port 8080]
  5. Open a new browser session using the “Chrome” icon either in the main Ubuntu “Desktop Launcher” or using the main Ubuntu “Applications” menu. Be aware, you might have the VSCode window maximized, so to see the Desktop Launcher you can restore the VSCode window to see the “Desktop Launcher”.

    Note

    You can maximize, and restore the VSCode window using the standard maximize/restore icon in the top-right-hand corner of the VSCode window.

  6. Navigate to the running Flask Application’s default-page in the open browser window using the following URL:
    http://localhost:8080/hello-message

    The result will be the JavaScript Object Notation (JSON) formatted output of the route “/hello-message”, as seen below:

    [Image: JSON response for /hello-message]

    Note

    In this application the default Flask port of 5000 is not used. The code specifies the app should listen on port 8080.

    In the next part, we will expand the application’s code-base to add more path-based routes.

  7. Stop the Flask Application by using the key combination CTRL-C in the VSCode terminal session, then close the Integrated Terminal session to give yourself more screen space before you edit the code, as seen below:

    10000001000006E20000046A675F7EC492B06290
  8. Edit the “app.py” file in VSCode: add a new line after line 8, then copy and paste the following code onto the new line (this should be around line 10).

    @app.route('/goodbye-message')
    def goodbye_message():
        message = {'message': 'Goodbye!'}
        return jsonify(message)
    
    @app.route('/custom-message/<name>')
    def custom_message(name):
        message = {'message': f'Hello, {name}!'}
        return jsonify(message)

    The file should now contain the new routes exactly as pasted above.
  9. Verify the code is the same as above, and the indentations are correct, then Save the file using File > Save.

  10. Either use the existing VSCode terminal session, or open a new VSCode Integrated Terminal Session, using Terminal > New Terminal in the main VSCode menu.

  11. Inside the terminal session, type the following command to start the Flask Application again as we did earlier.

    python app.py

    With the updated code, and the Flask Application (app.py) running again, let’s test the new routes.

  12. Either use an existing Chrome browser session, or open a new Chrome browser session as we did previously in the lab, then navigate to the following URLs one by one.
    http://localhost:8080/hello-message

    Returns the JSON response with the "Hello!" message.

    http://localhost:8080/goodbye-message

    Returns the JSON response with the "Goodbye!" message.

    http://localhost:8080/custom-message/<yourname>

    Note

    Replace the "<yourname>" placeholder with your desired name. For example, if you want to greet "Earnest," the URL would be “http://localhost:8080/custom-message/Earnest”. Each route will return a JSON response with the corresponding message.

    Here is an example using the string “Bob” as the parameter:
    http://localhost:8080/custom-message/Bob

    The result is the JSON response greeting “Bob”.

    You have now successfully seen a Flask application produce multiple endpoints using simple path-based routing.

7.3. Subfolder Path Routes

In addition to the previous routes, we will now add an example of using “subfolder” path-routes to our Flask microservice. Subfolder path-routes allow us to organize the endpoints into hierarchical structures. This is typical of APIs.

  1. Using VSCode, edit app.py, add a new line after the previous routes (this should be around line 19), then copy and paste the following code onto the new line (around line 20):

    @app.route('/api/v1/users')
    def get_users():
        users = [
            {'id': 1, 'name': 'John'},
            {'id': 2, 'name': 'Jane'},
            {'id': 3, 'name': 'Alice'}
        ]
        return jsonify(users)
    
    @app.route('/api/v1/posts')
    def get_posts():
        posts = [
            {'id': 1, 'title': 'First Post'},
            {'id': 2, 'title': 'Second Post'},
            {'id': 3, 'title': 'Third Post'}
        ]
        return jsonify(posts)
  2. Verify the code in the editor matches the snippet above, including indentation.
  3. In VSCode, save “app.py” using File > Save in the VSCode menu:

    Before we run the code, let’s quickly discuss what we have just added.

    In this updated code, we have introduced two new subfolder path routes: “/api/v1/users” and “/api/v1/posts”. These routes are nested within the “/api/v1” path, creating a logical grouping for related functionalities.

    The “/api/v1/users” route returns a JSON response with a list of users, while the “/api/v1/posts” route returns a JSON response with a list of posts.

    You can access these subfolder path routes by appending the corresponding paths to the base URL of your Flask application. For example:

    To retrieve the list of users: http://localhost:8080/api/v1/users

    To retrieve the list of posts: http://localhost:8080/api/v1/posts

    By organizing your API endpoints into subfolder path routes, you can achieve a more structured and intuitive API design. This allows for better separation of concerns and scalability as your microservice grows.
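As an aside, Flask also offers Blueprints for exactly this kind of grouping. The lab keeps things simple with plain routes, but a hedged sketch of the same “/api/v1” grouping using a Blueprint with a url_prefix would look like this (not part of the lab steps):

```python
from flask import Blueprint, Flask, jsonify

# All routes registered on this Blueprint share the /api/v1 prefix
api_v1 = Blueprint('api_v1', __name__, url_prefix='/api/v1')

@api_v1.route('/users')
def get_users():
    return jsonify([{'id': 1, 'name': 'John'}])

app = Flask(__name__)
app.register_blueprint(api_v1)
# GET /api/v1/users now resolves to get_users()
```

With this approach, versioning the API (say, adding “/api/v2”) means registering a second Blueprint rather than renaming every route.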

  4. As before, close the currently running Flask app using CTRL-C in the Terminal and re-launch the updated Flask application using the following command:

    python app.py
  5. Either use an existing Chrome browser session that is already open, or open a new Chrome browser session.

  6. Open the following URLs one by one in your existing Chrome Browser window to verify the code is working as intended:

  7. To retrieve the list of users, use the following URL:
    http://localhost:8080/api/v1/users

    The JSON response will contain the list of users (John, Jane, and Alice) defined in the code.
  8. To retrieve the list of posts (blog entries), use the following URL:
    http://localhost:8080/api/v1/posts

    The JSON response will contain the list of posts (First, Second, and Third Post) defined in the code.

    Congratulations, the Lab is complete.

7.4. Clean Up

  1. Stop the Flask Application using CTRL-C in the VSCode Integrated Terminal where the app is running.

  2. Close all Terminal sessions.

  3. Close VSCode, using File > Exit.

  4. Close all open Web Browser windows.

7.5. Summary

In this lab, we focused on understanding the concept of path-based routing in microservices by creating a simple Flask application. Path-based routing allows us to handle different endpoints and functionalities within a microservice by associating specific routes with corresponding functions.

We started by creating a basic Flask microservice with a single route, “/hello-message”, which returned a JSON response with a greeting message. We then expanded the microservice by adding additional routes, such as “/goodbye-message” and a dynamic route “/custom-message/<name>”.

Afterwards, we explored more advanced examples by introducing subfolder path routes. These routes allowed us to organize our API endpoints into hierarchical structures. We added “/api/v1/users” and “/api/v1/posts” routes as examples of subfolder path routes, each returning JSON responses with relevant data.

Throughout the lab, we leveraged the VSCode editor and its integrated terminal to enhance our productivity as developers. We tested the Flask application using a web browser and observed the JSON responses generated by each route.

By completing this lab, we gained a practical understanding of creating a Flask microservice, defining routes, and utilizing path-based routing to handle different functionalities. We also learned about organizing endpoints into subfolder path routes for better API design.

This knowledge provides a strong basis in key concepts used when building scalable microservices and developing APIs within the Flask framework.

Introduction to REST Verbs

Module 8

In this lab, you will learn the basics of REST (Representational State Transfer) verbs through implementation in a Flask application.

REST is an architectural style for designing networked applications, and RESTful APIs are commonly used for building web services.

Here, you will create a simple Flask application that manages a collection of books and implement different REST verbs (GET, POST, PUT and DELETE) to perform CRUD (Create, Read, Update, Delete) operations on the books.

8.1. Create a new project directory and virtual environment

We will be using the Ubuntu terminal to issue commands, and Visual Studio Code (VSCode) for the code editing during this lab.

  1. Open a new terminal session using the “Ubuntu Desktop Launcher” or Ubuntu “Applications” menu.

  2. Create a new working folder called “/home/wasadmin/Works/lab8” and navigate to the folder by issuing the following commands:

    mkdir -p /home/wasadmin/Works/lab8
    cd /home/wasadmin/Works/lab8
  3. Confirm you are in the “/home/wasadmin/Works/lab8” directory by using the “pwd” command as follows:

    pwd

    Result:

    /home/wasadmin/Works/lab8
  4. Execute the following commands to create a new directory for your project and set up a virtual environment by issuing the 4 commands below:

    mkdir book-service
    cd book-service
    python3 -m venv .venv
    source .venv/bin/activate

    The result will be a new directory called “book-service” and an activated virtual environment named “.venv” (your shell prompt will now be prefixed with “(.venv)”).

    What did we just do?

  5. “mkdir book-service”:

    This command creates a new directory named "book-service" in the current working directory. The directory will be used to organize your project files.

  6. “cd book-service”:

    This command changes the current working directory to "book-service". It allows you to navigate into the newly created directory.

  7. “python3 -m venv .venv”:

    This command creates a new virtual environment named ".venv" within the "book-service" directory. A virtual environment is an isolated Python environment that allows you to install packages and dependencies specific to your project without interfering with your system-wide Python installation.

    “python3” invokes the Python 3 interpreter.

    “-m venv” runs the “venv” module from Python’s standard library, which is used to create virtual environments.

    “.venv” is the name of the virtual environment. You can choose any name you like, but ".venv" is a common convention, and since it begins with a period “.” it is often excluded from being committed to git repos when using typical .gitignore files.

  8. “source .venv/bin/activate”:

    This command activates the virtual environment. When the virtual environment is activated, any subsequent Python-related commands will use the packages and dependencies installed within the virtual environment instead of the global system-wide Python environment.

    “source” is a command used to execute the script provided as an argument within the current shell environment.

    “.venv/bin/activate” is the script that activates the virtual environment by modifying the PATH and other environment variables.

    After running these commands, you will have a new directory "book-service" with a virtual environment set up inside it.

    You can install dependencies and run your Flask application within the virtual environment, isolating this app from other Python projects.
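If you ever need to confirm from code that a virtual environment is active, Python itself can tell you. This is a small illustrative check, not part of the lab steps:

```python
import sys

# Inside an activated venv, sys.prefix points at the environment
# directory (e.g. .../book-service/.venv), while sys.base_prefix
# still points at the system-wide Python installation.
in_venv = sys.prefix != sys.base_prefix
print('virtual environment active:', in_venv)
```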

    Now, let’s get back to the tasks at hand.

8.2. Installing necessary dependencies

  1. In the book-service directory, create a new file named requirements.txt using the touch command, which will create a new empty file.

    touch requirements.txt
  2. Open the “requirements.txt” file in VSCode using the command-line short-cut:

    code requirements.txt
  3. Using VSCode, edit the “requirements.txt” file by adding the following line:

    Flask
  4. Save the “requirements.txt” file, using File > Save, and Exit VSCode using File > Exit.

  5. Install the dependencies using pip:

    pip install -r requirements.txt
  6. To create the “app.py” file, we will copy an existing template file called “app_initial.py” from the “/home/wasadmin/Student_Templates/lab8” directory using the following command:

    cp /home/wasadmin/Student_Templates/lab8/app_initial.py app.py

    Verify the current structure of the “book-service” folder, to ensure the “app.py” file exists using the tree command:

    tree

    The result should resemble the output below:

    .
    ├── app.py
    └── requirements.txt
    
    0 directories, 2 files
  7. Open the “app.py” file in VSCode using the following command-line shortcut:

    code app.py

    Once the “app.py” file is opened in VSCode, have a read through the file.

    Explanation of “app.py”

    The code sets up a Flask application with routes for the different REST verbs.

    The “books” list represents an in-memory collection of books (in a real application, you would use a database instead).

    Locate the routes and their corresponding functions as follows:

    @app.route('/books', methods=['GET'])

    Returns the list of all books as a JSON response.

    @app.route('/books', methods=['POST'])

    Adds a new book to the collection based on the data provided in the request body and returns the created book as a JSON response.

    @app.route('/books/<int:book_id>', methods=['GET'])

    Returns the details of a specific book based on its ID as a JSON response.

    @app.route('/books/<int:book_id>', methods=['PUT'])

    Updates the details of a specific book based on its ID using the data provided in the request body and returns the updated book as a JSON response.

    @app.route('/books/<int:book_id>', methods=['DELETE'])

    Deletes a specific book based on its ID and returns a JSON response indicating success.

    It is not required that you understand the code completely; just appreciate that it provides endpoints that respond to the implemented REST verbs (GET, POST, PUT, DELETE).
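For orientation, a handler for the “/books/<int:book_id>” GET route generally follows this shape. This is a hedged sketch with in-memory sample data; the template's actual code may differ:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for the template's book collection
books = [
    {'id': 1, 'title': 'The Great Gatsby',
     'author': 'F. Scott Fitzgerald', 'genre': 'Fiction'},
]

@app.route('/books/<int:book_id>', methods=['GET'])
def get_book(book_id):
    # <int:book_id> converts the path segment to an int before the call
    book = next((b for b in books if b['id'] == book_id), None)
    if book is None:
        return jsonify({'error': 'Book not found'}), 404
    return jsonify(book)
```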

  8. Close the “app.py” file, then exit VSCode using the menu option File > Exit.

8.3. Create a Data file

In many applications it is essential to store and manage data persistently. One common approach is to use data files which are files that contain structured data in a specific format. In this lab, we will explore the creation of a data file called “books.json” to store and manage a collection of books in JSON format.

JSON (JavaScript Object Notation) is a lightweight data interchange format that is widely used for representing structured data. It provides a human-readable and easy-to-parse syntax, making it a popular choice for data storage and transfer.

By creating a “books.json” file, we can store the book data in a structured manner, allowing us to easily read, update, and manipulate the collection of books. We will leverage this file in a Flask application to manage the book data using RESTful APIs.

Let’s try this out using the steps below:

  1. Using the exact same terminal which was used to open “app.py” in VSCode, ensure you are still in the correct directory “/home/wasadmin/Works/lab8/book-service” by using the pwd command:

    pwd

    Result:

    /home/wasadmin/Works/lab8/book-service
  2. Copy the template file “/home/wasadmin/Student_Templates/lab8/books_initial.json” to create a pre-populated file called “books.json” using the following command:

    cp /home/wasadmin/Student_Templates/lab8/books_initial.json books.json
  3. Ensure that “books.json” has been copied, and contains the sample data as follows:

    cat books.json

    The resulting JSON output will be as follows:

    [
        {
            "id": 1,
            "title": "The Great Gatsby",
            "author": "F. Scott Fitzgerald",
            "genre": "Fiction"
        },
        {
            "id": 2,
            "title": "To Kill a Mockingbird",
            "author": "Harper Lee",
            "genre": "Fiction"
        },
        {
            "id": 3,
            "title": "1984",
            "author": "George Orwell",
            "genre": "Fiction"
        }
    ]

    Explanation of “app.py” and its use of “books.json”

    The “books.json” file is used to store the data of the books in JSON format. It serves as persistent storage for the book collection managed by the Flask application (app.py). Below is a brief overview of its use:

    • Loading Initial Book Data:
      • When the Flask application starts, the “books.json” file is loaded to initialize the “books” list with the initial book data. The data in the JSON file represents the book collection that the application will work with. This allows the application to have pre-defined books in the collection when it starts.
    • Retrieving Book Data:
      • The “get_books()” and “get_book(book_id)” functions retrieve book data from the books list. The data is returned as a JSON response to the client making the request. By loading the book data from the “books.json” file at the beginning, the application has access to the current state of the book collection.
    • Adding Books:
      • When a new book is added using the “add_book()” function, the book data is extracted from the request body and a new book object is created. The new book is then appended to the “books” list. After adding the book to the list, the updated book data is saved back to the “books.json” file. This ensures that the book collection is persisted and the newly added book is included in subsequent requests.
    • Updating Books:
      • The “update_book(book_id)” function updates the details of a specific book based on its ID. The updated book data is extracted from the request body and the corresponding book in the “books” list is updated. After updating the book data, the changes are saved back to the “books.json” file. This ensures that the updated book information is persisted for future requests.
    • Deleting Books:
      • The “delete_book(book_id)” function deletes a specific book based on its ID. The corresponding book is removed from the “books” list, and the updated book collection is saved back to the “books.json” file. This ensures that the deleted book is permanently removed from the collection.

        By using the “books.json” file as persistent storage, the Flask application can maintain the state of the book collection across multiple runs.

        The file allows the application to load the initial book data, save updates, and retrieve the current book data whenever needed.
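The load/save behaviour described above can be sketched with two small helpers. The names load_books and save_books are illustrative; the template's actual implementation may differ:

```python
import json

def load_books(path='books.json'):
    # Read the entire collection from disk (done once at startup)
    with open(path) as f:
        return json.load(f)

def save_books(books, path='books.json'):
    # Write the in-memory list back to disk after every change
    with open(path, 'w') as f:
        json.dump(books, f, indent=4)
```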

        Now let’s move on and test the application’s functionality using calls to the REST endpoints from the command line.

8.4. Using cURL to Interact with the Application

In this section, we will explore how to interact with the Flask application by using the command-line tool cURL. cURL is a versatile command-line tool for making HTTP requests and is particularly useful for testing and interacting with web services.

By using cURL, we can send various HTTP requests to different endpoints of our application and observe the responses. This allows us to test the functionality of the application and verify if it behaves as expected.

We will demonstrate the usage of cURL for different REST verbs, such as GET, POST, PUT, and DELETE, to interact with the book service application.

You will then see how to make requests to retrieve the list of books, add a new book, update an existing book, and delete a book.

Let’s dive into the examples and explore how cURL can be used to interact with the application!

  1. Start the application by issuing the following command in your existing open terminal session:

    python app.py

    Note

    To interact with the application, you can use a command-line tool like “curl”, or possibly consider a popular GUI tool such as “Postman”.

    In these examples, we are using a command-line approach utilizing the “curl” command.

    To retrieve the list of all books, we can send a GET request to: “http://localhost:5000/books”.

    The next few steps will guide us through how to do this.

  2. Leaving the application running, launch another new (separate) Terminal session using the open terminal’s menu option File > New Window.
  3. In the new terminal session window, use the “curl” command to send a GET request for a single book as follows:

    curl -X GET http://localhost:5000/books/1

    The result will be a call to the application’s endpoint using a GET request, returning the book item whose “id” field equals 1:

    {"author":"F. Scott Fitzgerald","genre":"Fiction","id":1,"title":"The Great Gatsby"}
  4. To add a new book using the “curl” command, execute the following command:

    curl -X POST -H "Content-Type: application/json" -d '{"title": "The Catcher in the Rye", "author": "J.D. Salinger", "genre": "Fiction"}' http://localhost:5000/books

    Tip

    The command is a single line. To make things easier, use a text editor to stage the command first, then copy and paste the entire command into the terminal.

  5. The result will be the following response from the endpoint:

    {"author":"J.D. Salinger","genre":"Fiction","id":4,"title":"The Catcher in the Rye"}

    Explanation of what just happened:

    The command above, sends a POST request to the “http://localhost:5000/books” endpoint with the JSON data representing the new book in the request body.

    The “-X POST” flag specifies the HTTP verb as POST, and the “-H "Content-Type: application/json"” flag sets the request header to indicate that the content is in JSON format.

    The “-d” flag followed by the JSON data enclosed in single quotes represents the request body.

    The JSON data sent in the request body corresponds to the book details:

    {
        "title": "The Catcher in the Rye",
        "author": "J.D. Salinger",
        "genre": "Fiction"
    }

    Executing the “curl” command adds a new book with the specified details and an id of 4 to the book collection. The id equals 4 because, in the “app.py” code, the add_book() function counts how many book items currently exist in “books.json” and increments that count by 1.
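That id calculation can be sketched as a one-line helper (the name next_book_id is illustrative, not necessarily the template's):

```python
def next_book_id(books):
    # Mirrors the described add_book() logic: new id = current count + 1
    return len(books) + 1

books = [{'id': 1}, {'id': 2}, {'id': 3}]
print(next_book_id(books))  # 4
```

Note that a count-based id can collide with an existing id once books are deleted; a production service would typically track a maximum id or let a database assign ids.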

  6. Issue the following “curl” command to get the latest list of books:

    curl -X GET http://localhost:5000/books

    The resulting JSON response is as follows:

    [{"author":"F. Scott Fitzgerald","genre":"Fiction","id":1,"title":"The Great Gatsby"},{"author":"Harper Lee","genre":"Fiction","id":2,"title":"To Kill a Mockingbird"},{"author":"George Orwell","genre":"Fiction","id":3,"title":"1984"},{"author":"J.D. Salinger","genre":"Fiction","id":4,"title":"The Catcher in the Rye"}]

    Note

    The data returned by the GET request is not formatted for human readability; it is raw data.
    The primary purpose of the “books.json” file is to store and manage the book data in a structured manner for backend operations.
    While the content may not be visually pleasing or easily readable to humans, it is designed to be consumed by other backend microservices or applications built to handle and present the data in a more user-friendly format.

8.5. Updating Data using PUT

In the previous sections, we explored how to use the GET and POST methods to retrieve and add data to our application. Now, we will focus on the PUT method, which allows us to update existing data.

The PUT method is commonly used in RESTful APIs to modify an existing resource. In our case, we will use it to update the details of a specific book in the book collection.

By sending a PUT request to the appropriate endpoint, we can specify the book’s ID and provide the updated information in the request body. The application will then locate the book with the matching ID and apply the updates accordingly.
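On the server side, a PUT handler matching this description generally looks like the following hedged sketch (in-memory sample data; the template's actual update_book() may differ):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory stand-in for the book collection
books = [{'id': 1, 'title': 'The Great Gatsby',
          'author': 'F. Scott Fitzgerald', 'genre': 'Fiction'}]

@app.route('/books/<int:book_id>', methods=['PUT'])
def update_book(book_id):
    book = next((b for b in books if b['id'] == book_id), None)
    if book is None:
        return jsonify({'error': 'Book not found'}), 404
    # Merge the JSON request body into the stored record
    book.update(request.get_json())
    return jsonify(book)
```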

In the upcoming examples, we will use cURL to demonstrate how to send PUT requests to the application. You will learn how to update the details of a book by specifying the book’s ID and providing the new data in the request body.

Let’s proceed to the examples and discover how to use the PUT method to update existing data in our application.

Issue an update using PUT:

To update the details of a specific book, you would send a PUT request to “http://localhost:5000/books/<book_id>”, where “<book_id>” is the ID of the book you want to update.

  1. Update the book with ID 1 by issuing the following “curl” command:

    curl -X PUT -H "Content-Type: application/json" -d '{"title": "New Title", "author": "New Author", "genre": "New Genre"}' http://localhost:5000/books/1

    The resulting output from the response will be as follows:

    {"author":"New Author","genre":"New Genre","id":1,"title":"New Title"}

    What just happened?

    The command used above sends a PUT request to the “http://localhost:5000/books/1” endpoint with the specified JSON body containing the updated book details.

    The response will include the updated details of the book in JSON format if it exists.

  2. To list the book with ID 1, issue the following curl command:

    curl -X GET http://localhost:5000/books/1

    The result will be this response:

    {"author":"New Author","genre":"New Genre","id":1,"title":"New Title"}

    We can see that the book item with id=1 has been updated.

    We will now look to delete this record from the list.

8.6. Deleting Data using DELETE

In addition to retrieving and modifying data, it is often necessary to remove specific resources from our application. The DELETE method allows us to perform this operation by sending a request to the corresponding endpoint.

In this application, to delete a specific book from our book collection, we can send a DELETE request to the “http://localhost:5000/books/<book_id>” endpoint, where “<book_id>” represents the ID of the book we want to delete.

For example, if we wish to delete the book with ID 1, we can use the following curl command-syntax:

curl -X DELETE http://localhost:5000/books/1

By executing this command, we send a DELETE request to the “http://localhost:5000/books/1” endpoint, triggering the deletion of the book with ID 1.

The response we receive will be a simple message in JSON format and will indicate the success of the deletion.
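A DELETE handler producing that kind of success message generally follows this shape (a hedged sketch with in-memory sample data; the template's actual delete_book() may differ):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for the book collection
books = [{'id': 1, 'title': 'The Great Gatsby'},
         {'id': 2, 'title': 'To Kill a Mockingbird'}]

@app.route('/books/<int:book_id>', methods=['DELETE'])
def delete_book(book_id):
    global books
    before = len(books)
    # Keep every book except the one being deleted
    books = [b for b in books if b['id'] != book_id]
    if len(books) == before:
        return jsonify({'error': 'Book not found'}), 404
    return jsonify({'message': 'Book deleted'})
```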

Let’s proceed to use the DELETE method to remove a specific book from our application.

  1. In the same terminal we have been using to issue previous curl commands, delete the book with ID 1, by issuing the following curl command:

    curl -X DELETE http://localhost:5000/books/1

    The resulting response message displayed will be as follows:

    {"message":"Book deleted"}
  2. Check the result using the following GET command:

    curl -X GET http://localhost:5000/books
  3. The resulting output shows that the book with id=1 has been removed from the book list:

    [{"author":"Harper Lee","genre":"Fiction","id":2,"title":"To Kill a Mockingbird"},{"author":"George Orwell","genre":"Fiction","id":3,"title":"1984"},{"author":"J.D. Salinger","genre":"Fiction","id":4,"title":"The Catcher in the Rye"}]

    This concludes this lab.

8.7. Clean Up

  1. Stop the running Flask Application using CTRL-C in the terminal where the Flask App (app.py) is running.

  2. Close all open Terminal sessions.

  3. Close all open Web Browser windows.

  4. Close all open VSCode files & windows.

8.8. Summary

In this lab, we explored the fundamentals of implementing REST verbs (GET, POST, PUT, DELETE) in a Flask application. We created a book-service Flask application that allows us to manage a collection of books using different RESTful operations.

Here’s a recap of what we covered in this lab:

  • Setting up the Environment:
    • We created a new directory for our project and set up a virtual environment using Python’s “venv” module to isolate our project dependencies.
  • Installing Dependencies:
    • We installed Flask, a lightweight web framework, as the main dependency for our application which allowed us to quickly assemble a RESTful API.
  • Creating the Flask Application:
    • We used a simple Flask application that included routes for handling various REST verbs.
    • The application enabled us to perform CRUD (Create, Read, Update and Delete) operations on our book collection, including adding new books, retrieving book details, updating book information, and deleting books.
  • Interacting with the Application:
    • We used cURL, a command-line tool, to interact with our Flask application. By executing cURL commands, we were able to send HTTP requests and observe the corresponding responses. This allowed us to test the functionality of our application and verify if it behaved as expected.

By mastering these concepts, you have gained a solid foundation in the fundamentals employed by developers when creating RESTful APIs using Flask.

GitHub Actions – End to End Pipeline

Module 9

In this lab you will learn how to set up a staged pipeline using GitHub Actions, incorporate secrets for secure access, and perform tasks such as building, testing, and pushing Docker images.

To accomplish this, you will start by creating a new GitHub repository and setting up a Docker Hub account. Next, you’ll add secrets to enhance security, clone the repository, and finally, configure a staged pipeline within GitHub Actions.

The pipeline workflow will include steps for code checkout, Docker setup, Docker Hub login, image building, pushing, and testing.

By the end of this lab, you will have a comprehensive pipeline that automates the process of building and deploying Docker images, enabling efficient software delivery.

9.1. Lab setup

In this part we will create a New Repository in your own GitHub account.

Before starting this lab, we need to make sure you have an active GitHub account and Personal Access Token (PAT), which were created in a previous lab. If you have not yet created a GitHub account or PAT, do so before continuing.

  1. Log into your personal GitHub account which you are using for this lab.

    Note

    We recommend that for labs involving GitHub you use a personal GitHub account, so as not to conflict with the organization you work for.

  2. Navigate to the main “Dashboard” (https://github.com/dashboard) of your GitHub account. You can do this by clicking the Dashboard link in the main top navigation toolbar, or by using the left-hand-side hamburger menu and selecting “Home”.
  3. Either use the global quick toolbar "+" button, or from the left-hand-side menu choose the "New" button, or, if shown, the “Create a new repository” button.
  4. In the "Create a new repository" page (https://github.com/new) enter the following name for the Repository.

    github-actions-lab9
  5. Add a description as follows:

    Lab 9 GitHub Actions

    Important

    The repo is named “github-actions-lab9”. Please ensure that you name the repo exactly as asked to ensure the lab instructions work.

  6. Choose “Private” repo.

    Important

    GitHub typically defaults to “Public” repos, so make sure you choose “Private” otherwise this repo becomes public.

  7. Click "Create repository" button to create the new repo.

    You will then be redirected to the home page of the repo. The URL of the repo home page will follow the syntax outline

9.2. Clone the repository locally

In this part we will clone the repository locally, allowing you to work on the required files in your local file-system on your lab machine.

As you make changes throughout the lab, you will commit these changes and push them to the GitHub origin.

Note

This part of the lab requires that you have previously created a Personal Access Token to access repos in your GitHub account. If not, do so now using the instructions detailed in the appendix at the end of this lab.

  1. Open a new Terminal session, and navigate to the “Works” folder in your user’s home directory using the following command:

    cd $HOME/Works

    You should now be in the “/home/wasadmin/Works” directory.

  2. Confirm you are in the “/home/wasadmin/Works” directory by using the “pwd” command:

    pwd

    Result:

    /home/wasadmin/Works

    Note

    If it has not already been created, then create it using the command: “mkdir -p /home/wasadmin/Works”

  3. Clone the remote GitHub repo using the following command syntax, but replace “<your_github_username>” placeholder with your actual GitHub username:

    git clone https://github.com/<your_github_username>/github-actions-lab9.git

    Tip

    It is also possible to copy the HTTPS link used to clone your repo, by using the copy-link icon located in the “<> Code” tab of the repo in the GitHub interface.

  4. Navigate into the cloned-repo root using the “cd” command as follows:

    cd github-actions-lab9

    We will now copy existing template-code into this repository, commit, and push the changes up to GitHub.

  5. Copy the required template files into the current directory by issuing the following command:

    cp -R /home/wasadmin/Student_Templates/lab9/* .
  6. Check that the files have been copied and are in the current folder by using the “tree” command as shown below:

    tree

    You should see the following files:

    ├── MyProject
    │   ├── Dockerfile
    │   ├── build_and_push.sh
    │   ├── build_local_image.sh
    │   ├── config.conf
    │   ├── create_env.sh
    │   ├── data_scripts
    │   │   ├── delete_all_customers.sh
    │   │   ├── delete_all_orders.sh
    │   │   ├── delete_all_products.sh
    │   │   ├── delete_customer_1.sh
    │   │   ├── delete_customer_by_id.sh
    │   │   ├── delete_order_1.sh
    │   │   ├── delete_order_by_id.sh
    │   │   ├── delete_product_1.sh
    │   │   ├── delete_product_by_id.sh
    │   │   ├── get_customers.sh
    │   │   ├── get_orders.sh
    │   │   ├── get_products.sh
    │   │   ├── insert_sample_customer_data.sh
    │   │   ├── insert_sample_order_data.sh
    │   │   ├── insert_sample_product_data.sh
    │   │   ├── sample_customer_data.json
    │   │   ├── sample_order_data.json
    │   │   └── sample_product_data.json
    │   ├── install_local.sh
    │   ├── my_app
    │   │   ├── __init__.py
    │   │   ├── app.py
    │   │   ├── templates
    │   │   │   └── index.html
    │   │   └── tests
    │   │       ├── __init__.py
    │   │       ├── test_app.py
    │   │       └── test_local.sh
    │   ├── requirements.txt
    │   ├── run_app.sh
    │   ├── run_container.sh
    │   └── run_container_local.sh
    ├── pipeline.final.yaml
    └── pipeline.initial.yaml
    
    5 directories, 36 files
  7. Check the repo status using the following command:

    git status

    Result:

    On branch main
    
    No commits yet
    
    Untracked files:
      (use "git add <file>..." to include in what will be committed)
    	MyProject/
    	pipeline.final.yaml
    	pipeline.initial.yaml
    
    nothing added to commit but untracked files present (use "git add" to track)
  8. Stage all new files, commit them, and push to GitHub using the following command sequence:

    git add .
    git commit -am "Add initial files"
    git push

    Note

    Your GitHub credentials should be cached from the initial GitHub lab. However, if prompted, use your GitHub username (not your email) and use a Personal Access Token (PAT) as the password.

    We are now almost ready to configure the repo for a GitHub Actions Workflow. Before we do so, we need to define some secrets. These secrets will store details of the DockerHub account we will be creating for the deployment stage of our GitHub Actions pipeline workflow.

9.3. Creating a Docker Hub Account and Personal Access Token

In this lab our primary objective is to create a Docker image and deploy it to Docker Hub. Before we proceed with the deployment process, we need to establish authentication between GitHub and Docker Hub.

To achieve this, we will require a Docker Hub account and a Personal Access Token (PAT). The PAT will serve as the password in our GitHub Actions pipeline, and we will securely store it as a Secret within GitHub.

By implementing this authentication mechanism, we can ensure a seamless and secure deployment process.

Let’s dive into the steps involved in setting up this authentication using secrets, and explore the power of automated deployments using both GitHub and Docker Hub.

  1. Open a new browser window using the Chrome icon desktop Launcher:

    Note

    Alternatively you can use the main Ubuntu “Applications” menu, located at the top of the Ubuntu Desktop.

  2. Go to the Docker Hub website (https://hub.docker.com/). If you do not have an existing personal Docker Hub account, register a new one. Once you have a verified Docker Hub account, sign in to it.

    Important

    It is recommended that you create the Docker Hub account using the same personal non-corporate email that you used when you created the GitHub account.

  3. Once logged into Docker Hub, click on your profile icon in the top-right corner and select "Account Settings" from the drop-down menu as seen below:

    [Screenshot: Docker Hub profile menu with “Account Settings” highlighted]
  4. In the Account Settings page, navigate to the "Security" tab.

  5. Locate the "Access Tokens" section and click on the "New Access Token" button.

  6. Enter a name for your access token in the "Access Token Description" field. This name is for your reference and should be descriptive to identify the purpose of the token. It is a good idea to use something like “wa2917 lab9” as seen below:

    [Screenshot: the “New Access Token” dialog]

    Note

    Ensure that the Access permissions are set to “Read, Write, Delete” as shown in the image above

  7. Once you have entered an “Access Token Description”, and set the desired “Access Permissions”, click on the "Generate" button.

Docker Hub will generate a new access token for you.

  1. Copy the token using the copy-icon as seen in the image below:

    [Screenshot: the generated access token with its copy icon]

    We will now store the token value since it will not be shown again for security reasons.

    Important

    Make sure you copy the generated token immediately as it will not be shown again. If you make a mistake, then you can delete the token, and create another one. We will be using the token later on in the lab.

  2. Open a new terminal using Applications > Terminal option or click on the Terminal icon in the main desktop launcher.

  3. Create a new (or open an existing) “my_tokens.txt” file in VSCode by entering the following command:

    code ~/Desktop/my_tokens.txt

    Note

    As seen in the image below, VSCode will launch, and open “my_tokens.txt”

    [Screenshot: “my_tokens.txt” open in VSCode]

    Note

    The token values shown in these images are examples and will not match yours; make sure you use your own token values.

  4. Add a description similar to the image above, then Paste your Docker Hub token into the file.

  5. Save the file using the File > Save menu option, then use File > Exit to close VSCode.

    The result will be a file called “my_tokens.txt” on the Ubuntu Desktop as seen below:

    [Screenshot: the “my_tokens.txt” file on the Ubuntu Desktop]

    We are now ready to use the new Docker Hub Personal Access Token (PAT) as a secret in GitHub.

9.4. Adding Secrets to the GitHub Repository

Now that we have successfully generated the Personal Access Token (PAT) in DockerHub, our next step is to incorporate it as a secret within GitHub.

Considering our intention to utilize multiple secrets for specific variables within our pipeline, we will add two secrets:

  • one for your DockerHub Username,
  • one for the DockerHub Personal Access token (PAT) you created in Docker Hub

To accomplish this, we will use the following steps ensuring a secure and efficient integration of these secrets into our workflow.

  1. In an Existing or New Chrome Browser session, navigate to your “github-actions-lab9” repository home-page:
    https://github.com/<your-username>/github-actions-lab9

  2. Click on the "Settings" tab for the repo.

  3. In the left sidebar, under “Security”, click on “Secrets and variables”.

  4. Select the “Actions” sub-menu.

  5. Click on the "New repository secret" button.

  6. Enter the name “DOCKERHUB_USERNAME” and use your Docker Hub username as the Secret value as seen below:

    [Screenshot: the new-secret form with name “DOCKERHUB_USERNAME”]

    Double check and make sure to use the correct Docker Hub username.

  7. Click “Add secret” button

  8. You will see the first secret “DOCKERHUB_USERNAME” has been added, as seen below:

    [Screenshot: the secrets list showing “DOCKERHUB_USERNAME”]

    Now we will follow the same process to create the “DOCKERHUB_TOKEN” secret.

  9. Click the “new repository secret” button.

  10. Type the Name “DOCKERHUB_TOKEN” and copy/paste the recently generated Docker Hub access token as the Secret’s value as shown below:

    [Screenshot: the new-secret form with name “DOCKERHUB_TOKEN”]

    Note

    Remember we saved the token in a file named “my_tokens.txt” in the Ubuntu Desktop earlier.

  11. Click “Add secret” button

  12. You will see the second secret “DOCKERHUB_TOKEN” has been added, as seen below:

    [Screenshot: the secrets list showing “DOCKERHUB_TOKEN”]

    Make sure you have followed the above steps to add two secrets.

    Now that the secrets are added, we can access them in a GitHub Actions workflow using the syntax ${{ secrets.SECRET_NAME }}, where SECRET_NAME is the name of the secret you set up.

    To reference the Docker Hub username secret, you would use a variable in the workflow YAML for example: ${{ secrets.DOCKERHUB_USERNAME }}.

    We will now create the workflow and add the two secrets.
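GitHub resolves the ${{ secrets.NAME }} placeholders before a step runs. As a rough analogy only (GitHub's actual expansion syntax and rules differ, and the secret values come from the repository's encrypted store), Python's string.Template performs a similar substitution:

```python
from string import Template

# GitHub-style "${{ secrets.X }}" is not Template syntax; this is only an
# analogy: placeholders are replaced with secret values before the step runs.
step_command = Template("docker login -u $DOCKERHUB_USERNAME -p $DOCKERHUB_TOKEN")

# In GitHub Actions the values come from the repository's encrypted secrets;
# here we fake them with a plain dictionary.
fake_secrets = {"DOCKERHUB_USERNAME": "alice", "DOCKERHUB_TOKEN": "dckr_pat_example"}

resolved = step_command.substitute(fake_secrets)
print(resolved)  # docker login -u alice -p dckr_pat_example
```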

9.5. Setting up Workflow and File Folders in GitHub

In this part, we will set up the workflow by creating the required folders and initial pipeline.yaml

  1. Using the Existing Terminal session we used earlier in the lab, ensure that you are in the “/home/wasadmin/Works/github-actions-lab9” directory (the repo root) using the following command:

    cd /home/wasadmin/Works/github-actions-lab9

    Note

    If you don’t have a terminal session open, then open a new one and navigate to the “/home/wasadmin/Works/github-actions-lab9” directory.

  2. Create a new folder named "workflows" within a folder called “.github” using the following command:

    mkdir -p .github/workflows

    REMEMBER: When we use the “-p” switch with the mkdir command, it means the entire multi-folder path is created.
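For comparison, the same idempotent multi-level directory creation that "mkdir -p" performs looks like this in Python (run here against a temporary directory so it is side-effect free):

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    # parents=True creates intermediate folders; exist_ok=True makes the
    # call idempotent, mirroring the behaviour of "mkdir -p".
    workflows = Path(tmp) / ".github" / "workflows"
    workflows.mkdir(parents=True, exist_ok=True)
    workflows.mkdir(parents=True, exist_ok=True)  # second call is a no-op
    print(workflows.is_dir())  # True
```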

  3. Use the “tree” command to verify the correct structure exists:

    tree .github

    Result:

    .github
    └── workflows
    
    1 directory, 0 files

    We will now create a GitHub Actions workflow file named “pipeline.yaml”, and view it in the console.

  4. Copy the template file called “pipeline.initial.yaml” to create a new file called “pipeline.yaml” into the new workflows folder you created. Use the following command:

    cp ./pipeline.initial.yaml .github/workflows/pipeline.yaml
  5. Display the contents of the file using the “cat” command:

    cat .github/workflows/pipeline.yaml

    The result will be as follows:

    name: Build, Test, and Push Docker Image
    
    on:
      push:
        branches:
          - main  # Change this to your main branch name
    
    jobs:
      build-test-push:
        runs-on: ubuntu-latest
    
        steps:
          - name: Checkout code
            uses: actions/checkout@v3
    
          - name: Set up Docker Buildx
            uses: docker/setup-buildx-action@v1
    
          - name: Login to DockerHub
            uses: docker/login-action@v2
            with:
              username: ${{ secrets.DOCKERHUB_USERNAME }}
              password: ${{ secrets.DOCKERHUB_TOKEN }}
    
          - name: Build and push
            uses: docker/build-push-action@v2
            with:
              context: MyProject
              push: true
              tags: ${{ secrets.DOCKERHUB_USERNAME }}/my_app:latest
  6. Now open the YAML file (pipeline.yaml) in Visual Studio Code (VSCode) using the VSCode command-line tool:

    code .github/workflows/pipeline.yaml

    The result is that “pipeline.yaml” will be displayed in the VSCode editor as seen below:

    Note

    You may need to update docker/login-action@v1 to docker/login-action@v2 as shown below:

    [Screenshot: “pipeline.yaml” open in VSCode]

    Have a read through the file, noting that it is written in YAML, a format which VSCode recognizes. The file uses the YAML default of 2 spaces for indentation.

    By contrast, the default convention in Python is 4 spaces, while YAML’s is 2 spaces.

    You have the flexibility to choose any spacing you prefer in both Python and YAML, but it is crucial to maintain consistency throughout the entire file.

    Tip

    Using the default conventions is recommended as it aligns with industry standards, the majority of Python and YAML files adhere to the default conventions.

Demystifying the workflow file (pipeline.yaml)

This GitHub Actions workflow is designed to build, test, and push a Docker image to Docker Hub to demonstrate an end-to-end Pipeline.

Let’s go through each step and explain what’s happening:

  • Checkout code:
    • This step uses the actions/checkout@v3 action to clone the repository code into the workflow runner.
  • Set up Docker Buildx:
    • This step uses the docker/setup-buildx-action@v1 action to set up Docker Buildx, which is a Docker CLI plugin for building multi-platform images.
    • It enables concurrent builds and supports different platforms and architectures.
  • Login to DockerHub:
    • This step uses the docker/login-action@v2 action to log in to Docker Hub. It requires your Docker Hub username and personal access token.
    • Note: Make sure to update the secrets in your GitHub repository with your Docker Hub account details.
  • Build and push:
    • This step uses the docker/build-push-action@v2 action to build and push the Docker image.
    • It specifies the build context as the MyProject directory, enables pushing the image (push: true), and sets the image tag to ${{ secrets.DOCKERHUB_USERNAME }}/my_app:latest, where DOCKERHUB_USERNAME is the secret configured with your Docker Hub username.

Any push event to the specified branch (in this case, the main branch) will trigger the workflow. It will then perform the steps sequentially, building the Docker image and pushing it to Docker Hub.

We are now ready to commit the new files we have made, i.e. the GitHub Actions Workflow as specified in “pipeline.yaml”.

9.6. Commit the workflow changes

In this part, we will commit our new workflow, and push to GitHub, thus triggering the Workflow.

  1. Close VSCode, using File > Exit, being sure to discard any changes you may have made to “pipeline.yaml” while you were reading it.

  2. In the Existing Terminal session, issue the following git commands to commit and push the new workflow to GitHub:

    git status
    git add .
    git commit -m "Add pipeline file"
    git push

    Tip

    Use the “git status” command, to check the status of the repo before you issue git commands.

    We will now move on to check the status of the Workflow in the GitHub Actions interface.

9.7. Monitor the Workflow Execution

You can observe the execution status of all workflows in your repository at any given time, including monitoring the execution of individual steps within an active runner. When a workflow is being executed in a runner, the workflow output displays the progress of each step, including any possible errors or warnings. If the workflow is executed successfully, it concludes with a green tick next to the corresponding run.
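Besides the web UI, workflow runs can also be listed through GitHub's REST API (the documented GET /repos/{owner}/{repo}/actions/runs endpoint). The sketch below only builds the authenticated request; the username and token are placeholders, and no network call is made:

```python
import urllib.request

def build_runs_request(owner: str, repo: str, token: str) -> urllib.request.Request:
    # Endpoint documented in the GitHub REST API for Actions workflow runs.
    url = f"https://api.github.com/repos/{owner}/{repo}/actions/runs"
    return urllib.request.Request(
        url,
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {token}",  # your GitHub PAT
        },
    )

# Placeholders only; substitute your real username and PAT before sending.
req = build_runs_request("<your_github_username>", "github-actions-lab9", "<PAT>")
print(req.full_url)
```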

  1. To monitor your workflow execution, navigate to the “Actions” tab in your GitHub repository, as shown in the image below:

    [Screenshot: the “Actions” tab in the GitHub repository]

    Note

    The URL will be in the following format:
    https://github.com/<your_github_username>/github-actions-lab9/actions

    [Screenshot: the list of workflow runs]

    As we can see in the image above, all runs from each workflow in your repository will be displayed.

    If you only see “red” run entries, it means there might be a mistake in your YAML file. Please ensure that it matches the example provided.

    If the YAML syntax is correct, a “green” checkmark will appear next to the most recent event (or workflow run event) entry, which indicates a successful execution of the workflow.

    We can now move on to update the pipeline workflow (pipeline.yaml) with the required additional YAML to add another job (stage) to test the docker image and trigger another run.

  2. Ensure that the “Add pipeline file” workflow run is successful before continuing.

    Tip

    If the job fails, drill down into the job to locate the reason in the job logs. Often the cause is incorrect Docker Hub credentials/GitHub Secrets.

9.8. Add a test to the pipeline

Let’s update the pipeline workflow to append a new job which launches a Docker container, using the image in Docker Hub, and executes a test from inside the container to check that the Docker image has been built correctly and the application is functioning.

  1. In a New or existing Terminal session (if you have not closed the previous Terminal session) ensure you are in the “/home/wasadmin/Works/github-actions-lab9” directory, by running the following command:

    cd /home/wasadmin/Works/github-actions-lab9
  2. Copy the template file called “pipeline.final.yaml” over the existing “pipeline.yaml” in the “.github/workflows” folder, using the following command:

    cp ./pipeline.final.yaml .github/workflows/pipeline.yaml
  3. Open the “pipeline.yaml” file in VSCode using the following command:

    code .github/workflows/pipeline.yaml
  4. Reading through “pipeline.yaml”, you will see the additional job labelled “test”, as seen in the image below:

    [Screenshot: “pipeline.yaml” showing the additional “test” job]
  5. Once you have read through the code, exit VSCode using File > Exit

    Note

    Do not save any changes you may have made while reading through the file.

  6. Commit, and push updated “pipeline.yaml” file to GitHub

    git status
    git add .
    git commit -m "Added test job"
    git push
  7. Check out the updated workflow in the GitHub Actions interface as we did before to see a new logged run.

  8. The result will be similar to the following image below:

    [Screenshot: the workflow runs list with the new run]

    As seen in the image above, there is a new workflow run logged titled with the commit message of the last commit.

  9. Once the Workflow run is complete, click on the Workflow run titled
    “Added test job”, or the latest one if you have made other commits:

    You will then be taken to the details page of the run you have just clicked on, which will show the separate outputs for each job in the pipeline.

  10. Locate the run for the “pipeline.yaml” “on: push” event, and click on the “test image” job as seen below:

    [Screenshot: the “test image” job in the run details page]
  11. Expand the “Test image” job-step labelled “Run pytest inside container” to see the result of the step which tests the Docker image, as seen below:

    [Screenshot: the expanded “Run pytest inside container” step]

    The result will be as follows (You may have to scroll down to see the result):

    [Screenshot: the pytest output in the step log]
  12. Navigate to Docker Hub, and check that the image has been pushed.
    https://hub.docker.com/repositories/<your_docker_hub_username>

  13. Click on “Repositories” in the main menu to verify the image “<your_dockerhub_username>/my_app” exists in Docker Hub as seen below:

    [Screenshot: the my_app repository listed in Docker Hub]

    Congratulations, you have completed an end-to-end pipeline.

    You can choose to complete the optional challenge, or skip and move onto the CleanUp section.

9.9. Optional Challenge Step

The goal of this step is to see if you can run the application by launching a container using the image that now exists in your Docker Hub account.

  1. Open a new or existing terminal session

  2. Navigate to the working folder “/home/wasadmin/Works/github-actions-lab9”.

  3. Log in to Docker Hub using your Docker Hub username and password/token, replacing <DOCKERHUB_USERNAME> and <DOCKERHUB_TOKEN> with your actual Docker Hub credentials, using the command-syntax below:

    docker login --username <DOCKERHUB_USERNAME> --password <DOCKERHUB_TOKEN>

    Note

    This command is a single line

  4. Pull the Docker image from Docker Hub, replacing <DOCKERHUB_USERNAME>/my_app with the name of your Docker Hub repository using the command-syntax below:

    docker pull <DOCKERHUB_USERNAME>/my_app:latest

    Note

    This command is a single line

  5. Start a Docker container based on the pulled image:

    docker run -d --name my_app -p 5001:5000 <DOCKERHUB_USERNAME>/my_app:latest

    Note

    This command is a single line

    This command starts a container named my_app and maps port 5001 on your local machine to port 5000 inside the container.

  6. Open a web browser window to access the running application using the following URL:
    http://localhost:5001

    Test the simple functionality of the application running in the Docker container.

  7. Once you are happy, stop and remove the container using the following commands:

    docker stop my_app
    docker rm my_app

    By following these steps above, you can pull the Docker image from Docker Hub and run the application locally using your image created by the pipeline.

    This allows you to test the functionality of the application in an environment similar to the production environment.
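The browser check above can also be automated. The sketch below stands in for the container with a stdlib HTTP server on an ephemeral port; against the real container you would request http://localhost:5001/ instead:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubApp(BaseHTTPRequestHandler):
    """Stands in for the containerised app in this sketch."""
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the output quiet

server = HTTPServer(("127.0.0.1", 0), StubApp)   # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Against the real container this URL would be http://localhost:5001/
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    status, body = resp.status, resp.read()
server.shutdown()

print(status, body)  # 200 b'ok'
```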

9.10. Cleanup

  1. Close all Terminal sessions, Open Web Browser windows and VSCode windows.

9.11. Summary

During this lab, we focused on building an end-to-end pipeline using GitHub Actions to automate the process of creating and deploying Docker images. The lab covered several aspects of the pipeline setup, including:

  • Lab Setup:
    • Created Personal Access Token for authentication.
    • Created a new repository in your GitHub account and cloned it locally.
  • Added Secrets to the GitHub Repository
  • Created a Staged Pipeline in GitHub Actions:
    • Set up a YAML file to define the pipeline stages, including code checkout, Docker setup, image building, pushing, and testing.
  • Set up Workflow and File Folders in GitHub:
    • Created the necessary folders and files in the repository.
  • Committed Workflow Changes:
    • Committed and pushed the workflow changes to the GitHub repository.
  • Monitored Workflow Execution:
    • Monitored the execution status of the workflow in the GitHub Actions interface.
  • Added a Test to the Pipeline:
    • Modified the pipeline YAML file to include a job for testing the Docker image.
  • Optionally completed the Challenge Step:
    • Launched a Docker container using the Docker image created by the pipeline and tested its functionality.

By following this lab, you gained hands-on experience in setting up a complete end-to-end pipeline with GitHub Actions, securely integrating Docker Hub, and automating the process of building and deploying Docker images.

Vulnerability Scanning & Code Coverage

Module 10

This lab demonstrates vulnerability scanning and code coverage using two tools:

“Bandit” for vulnerability scanning and “Coverage.py” for measuring code coverage.

In the first part, you will learn how to enhance the security of your Python code using Bandit. Bandit is a static analysis tool that identifies common security issues in Python applications. By running Bandit, you can detect potential vulnerabilities early on and take necessary actions to mitigate them. The lab will guide you through the installation and usage of Bandit, as well as interpreting scan results.

The second part of the lab introduces “Coverage.py”, a tool for measuring code coverage. You will learn how to install and use Coverage.py to assess the extent to which your unit tests cover your code. Code coverage helps identify areas of your code that require more testing, ensuring better overall quality and reliability of your software. You will run unit tests with Coverage.py and generate coverage reports to gain insights into the effectiveness of your testing efforts.
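Code coverage, at its core, records which lines execute while the tests run. Coverage.py does this robustly; the stdlib-only toy below illustrates the idea with sys.settrace:

```python
import sys

def classify(n):
    if n < 0:
        return "negative"      # this branch is never exercised below
    return "non-negative"

executed = set()

def tracer(frame, event, arg):
    # Record every line event that occurs inside classify().
    if event == "line" and frame.f_code is classify.__code__:
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
classify(5)                    # only the non-negative path runs
sys.settrace(None)

first = classify.__code__.co_firstlineno
offsets = {lineno - first for lineno in executed}
# The 'return "negative"' line (offset 2) never ran: incomplete coverage.
print(2 in offsets)  # False
```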

By completing this lab, you will gain practical experience in improving the security and test coverage of your Python code, enabling you to build more robust and reliable applications.

10.1. Lab Setup

In this lab, we are going to use pre-created sample Python Flask Applications, and using Visual Studio Code (VSCode) for code editing.

  1. Open a new Terminal session using the Applications menu located at the top of the Ubuntu Desktop, as shown below:

    [Screenshot: opening a Terminal from the Applications menu]
  2. Create a new working directory for “lab10” using the following command:

    mkdir -p /home/wasadmin/Works/lab10
  3. Navigate to the newly created directory “/home/wasadmin/Works/lab10” using the “cd” command:

    cd /home/wasadmin/Works/lab10
  4. Confirm you are in the “/home/wasadmin/Works/lab10” directory using the “Print Working Directory (pwd)” command:

    pwd

    Result:

    /home/wasadmin/Works/lab10
  5. Copy the required template lab-files into the current directory by issuing the following command:

    cp -R /home/wasadmin/Student_Templates/lab10/* .
  6. Validate that you have copied all the template files correctly using the “tree” command as follows:

    tree

    The result will be the same as below:

    .
    ├── MyDBProject
    │   ├── app.py
    │   ├── database.db
    │   ├── database_wrapper_test.py
    │   ├── run_app.sh
    │   ├── run_coverage.sh
    │   ├── gen_html_report.sh
    │   └── templates
    │       ├── customers.html
    │       └── index.html
    └── MyLoginProject
        ├── README.md
        ├── login_app.py
        ├── login_app_fixed.py
        └── templates
            ├── index.html
            └── login.html
    
    4 directories, 13 files

    We are now ready to start the next part, which demonstrates how to initiate Vulnerability Scanning using Bandit.

10.2. Enhancing Security with Bandit

Bandit is a tool designed to find common security issues in Python code. It allows developers to identify and address potential security vulnerabilities at an early stage.

At the end of this section, you should be able to:

  • Install and operate Bandit for vulnerability scanning.
  • Comprehend and act upon the results of a vulnerability scan.
  • Carry out static code analysis on a Flask application.

In this part, we are using a simple flask app located in the “MyLoginProject” folder called “login_app.py”. This app has some known vulnerabilities.

Note

It is not required that you understand the code fully.

  1. Navigate to the “MyLoginProject” directory using:

    cd ~/Works/lab10/MyLoginProject

    Note

    The tilde symbol (~) represents the home directory of the current user, which on the lab machine is “/home/wasadmin”. Note that the environment variable $HOME also contains the user’s home directory.
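The equivalence the note describes can be checked directly in Python (os.path.expanduser falls back to the system account database if HOME is unset):

```python
import os
import os.path

home = os.path.expanduser("~")   # expands "~" to the home directory
print(home)                      # on the lab machine: /home/wasadmin

# When HOME is set, expanduser("~") returns exactly that value.
print(home == os.environ.get("HOME", home))  # True
```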

  2. Launch VSCode and automatically open the “login_app.py” file using the command-line tool as follows:

    code login_app.py
  3. The file’s contents will be displayed as follows:

    [Screenshot: “login_app.py” open in VSCode]

    Understanding the code:

    This app (“login_app.py”) sets up a simple Flask web application that supports a home page and a login functionality.

    • Initially, it imports necessary Flask functionalities and initializes a Flask app.
    • It then defines a function, “check_password”, which validates a user-provided password against a hardcoded password ("mysecretpassword").
    • The “home” route serves an “index.html” page when the root URL is accessed, while the ”login” route behaves differently depending on the HTTP request type:
      • If it’s a “POST” (usually when a form is submitted), the app checks the provided password and returns an "Access granted!" or "Access denied!" message.
      • If it’s a “GET” (typically when the page is accessed), the app serves a 'login.html' page.
    • The script is designed to run the application in debug mode if executed directly.
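Stripped of Flask, the routing behaviour described above reduces to a small pure function. This is an illustrative sketch only, not the lab's actual code:

```python
HARDCODED_PASSWORD = "mysecretpassword"  # the weakness Bandit will flag

def check_password(password: str) -> bool:
    return password == HARDCODED_PASSWORD

def login(method: str, form: dict) -> str:
    # Mirrors the /login route: POST validates the form, GET serves the page.
    if method == "POST":
        if check_password(form.get("password", "")):
            return "Access granted!"
        return "Access denied!"
    return "<login.html>"  # stand-in for render_template('login.html')

print(login("POST", {"password": "mysecretpassword"}))  # Access granted!
print(login("POST", {"password": "wrong"}))             # Access denied!
print(login("GET", {}))                                 # <login.html>
```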
  4. Once you have finished reading “login_app.py”, close and exit using File > Exit in the VSCode menu.

10.3. Install Bandit

In this part, we will create a new virtual environment using “pyenv”, then install Bandit manually using pip.

  1. Using your existing open Terminal session, create a new virtual environment called “lab10” using “pyenv” and activate it using the following commands:

    pyenv virtualenv 3.10.7 lab10
    pyenv activate lab10

    Note

    You can deactivate the virtual environment using the command syntax: “pyenv deactivate”.

  2. Install the Bandit and Flask dependencies by issuing the following commands one after another:

    pip install Flask==2.3.2
    pip install bandit==1.7.5

10.4. Run a Vulnerability Scan with Bandit

  1. To identify the default tests Bandit utilizes to scan for vulnerabilities, execute the following command:

    bandit --help

    The result will include a list of vulnerability tests, as seen in the redacted example output below:

    The following tests were discovered and loaded:
    -----------------------------------------------
            B101    assert_used
            B102    exec_used
            B103    set_bad_file_permissions
            B104    hardcoded_bind_all_interfaces
            B105    hardcoded_password_string
            B106    hardcoded_password_funcarg
            B107    hardcoded_password_default
            B108    hardcoded_tmp_directory
            <REDACTED for brevity>

    Running a vulnerability scan on your Flask application using Bandit allows you to identify potential security issues in your Python code.

  2. Run the following command to start the analysis:

    bandit login_app.py

    The result will be similar to the example output below:

    [Screenshot: Bandit scan output]

    Note

    This command instructs Bandit to analyze the “login_app.py” file, located in the current directory, for potential security vulnerabilities.

    Tip

    It is possible to use the “-r” option which makes Bandit perform a recursive search on the current directory. However, in this example, we are not using a recursive search.

    Interpreting the Vulnerability Scan Results

    After executing the vulnerability scan command Bandit provides a detailed report of its findings. The report includes information about the line number, issues identified, severity level, and confidence level of each discovered security issue.

    Looking at the output (below) more closely, we can see that there are two vulnerabilities. One is marked as Severity=Low (blue), and the other is marked as Severity=High (red).

    [Screenshot: Bandit findings showing one Low and one High severity issue]

    In the image above we can see the details of each vulnerability. Each vulnerability has two links which point to background information and/or root causes, showing how it may be related to generic or specific known vulnerabilities. The pages the links refer to also offer recommendations on how these vulnerabilities may be fixed.

    The first link is to a related Common Weakness Enumeration (CWE).

    CWE is a community-developed list of software and hardware weakness types. It serves as a common language, a measuring stick for security tools, and as a baseline for weakness identification, mitigation, and prevention efforts.

  3. To look up the HIGH severity finding’s CWE, Open a New Browser Window by launching Chrome using the Applications menu as shown below:

    [Screenshot: launching Chrome from the Applications menu]
  4. Navigate to the URL:
    https://cwe.mitre.org/data/definitions/94.html

    The resulting loaded page gives details of this vulnerability as per the example below:

    [Screenshot: the CWE-94 definition page]

    The second link is to a Bandit-specific site which discusses the finer details of the Bandit vulnerability code “b201”.

  5. To look up the Bandit “b201” code, Open a New Browser Tab/Window and navigate to the following URL:
    https://bandit.readthedocs.io/en/1.7.5/plugins/b201_flask_debug_true.html

    Note

    The URL is one single line.

    When the page is loaded, you will see the “b201” details page as per the image below:

    [Screenshot: the Bandit “b201” plugin documentation page]
  6. To look up the LOW severity finding’s CWE, Open a New Browser Tab/Window and navigate to the URL:
    https://cwe.mitre.org/data/definitions/259.html

    The result will be a page that gives details of this vulnerability as per the example below:

    [Image: CWE-259 definition page at cwe.mitre.org]

    To look up the Bandit “b105” code, Open a New Browser Tab/Window and navigate to the following URL:
    https://bandit.readthedocs.io/en/1.7.5/plugins/b105_hardcoded_password_string.html

    Note

    The URL is one single line.

    You will see the “b105” details page as per the image below:

    [Image: Bandit “b105” plugin documentation page]

    Using the intelligence gleaned from the above URLs, we can fix the code.

    To save time, we have made the required changes to “login_app.py” and saved them in a file called “login_app_fixed.py”. The changes are on lines 7 and 31.

  7. Open up the “login_app_fixed.py” using the VSCode command-line tool:

    code login_app_fixed.py
  8. Read through the code, noting the following changes:

    Line 7 previous:

    hardcoded_password = "mysecretpassword"

    Changed to:

    hardcoded_password = os.environ.get('PASSWORD') #Fix for B105: hardcoded_password_string: Hardcoded variable. Now uses environment variable.

    Line 31 previous:

    app.run(debug=True)

    Changed to:

    app.run(debug=False)

    We can see the entire contents of the “login_app_fixed.py” file below:

    [Image: Full contents of “login_app_fixed.py”]
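    The B105-style remediation can be illustrated with a small, self-contained sketch. The snippet below is not the lab's actual file; it only demonstrates reading a secret from the environment instead of hardcoding it (the variable name PASSWORD matches the lab, everything else is illustrative):

```python
import os

def get_password() -> str:
    """Read the secret from the environment instead of hardcoding it.

    Returning an empty string when the variable is unset lets callers
    fail closed rather than fall back to a baked-in password.
    """
    return os.environ.get("PASSWORD", "")

# Demonstration only: a real deployment would set PASSWORD outside the program.
os.environ["PASSWORD"] = "mysecretpassword"
print(get_password())  # → mysecretpassword
```

    Because the secret never appears in the source, a scanner such as Bandit has no hardcoded string to flag.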

    Now that we have reviewed the “patched” code, let's see whether the vulnerabilities have been fixed.

  9. Close the “login_app_fixed.py” file using the File > Exit menu of VSCode.

  10. In the current terminal session, run Bandit again, this time against the “login_app_fixed.py” file:

    bandit login_app_fixed.py

    The result will be reported as follows:

    [main]	INFO	profile include tests: None
    [main]	INFO	profile exclude tests: None
    [main]	INFO	cli include tests: None
    [main]	INFO	cli exclude tests: None
    [main]	INFO	running on Python 3.10.7
    Run started:2023-07-11 19:28:49.311253
    
    Test results:
    	No issues identified.
    
    Code scanned:
    	Total lines of code: 25
    	Total lines skipped (#nosec): 0
    
    Run metrics:
    	Total issues (by severity):
    		Undefined: 0
    		Low: 0
    		Medium: 0
    		High: 0
    	Total issues (by confidence):
    		Undefined: 0
    		Low: 0
    		Medium: 0
    		High: 0
    Files skipped (0):

    We can see from the output above that there are no vulnerabilities in the “login_app_fixed.py” file according to Bandit.

    Optionally: If you are interested in running the Flask app, use the following commands (on Linux, the environment variable is set with “export”):

    export PASSWORD=mysecretpassword
    python login_app_fixed.py

    In this section of the lab, we focused on enhancing the security of our Python code. We achieved this by running Bandit on our code and interpreting the scan results to identify potential vulnerabilities. Necessary changes were made to fix the identified issues.

    By using Bandit, we gained practical experience in improving the security of our Python projects.

    General Tips when using Vulnerability Scanning Tools:

  11. Carefully review generated reports, prioritizing issues with high severity and confidence.

  12. Take note of the provided information to understand the nature of each vulnerability.

  13. Consider the recommendations provided by the tools and take necessary actions to mitigate the identified vulnerabilities.

Vulnerability scanning is a vital step in any software development lifecycle as it helps identify potential security issues in application code, thus reducing the risk of your software being exploited by malicious actors. It’s important to understand the output of the vulnerability scanning and remediate any high-risk vulnerabilities as soon as possible.

We will now move on to looking at how we can assess code coverage.

10.5. Code Coverage Overview

Code coverage is a metric that measures the extent to which your tests cover your code. Coverage.py is a tool used in Python to measure code coverage. It provides a metric to identify areas of code that have not been tested and may contain bugs. By measuring code coverage, developers can ensure that their tests effectively exercise the code, increasing the overall quality and reliability of the software.
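As a conceptual illustration (not part of the lab files), the standard-library “trace” module can record which lines execute, which is roughly what “Coverage.py” does in a far more polished way:

```python
import trace

def classify(n: int) -> str:
    if n < 0:
        return "negative"       # this branch is never exercised below
    return "non-negative"

# Count executed lines while the function runs, similar in spirit to
# the data "Coverage.py" stores in its .coverage file.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(classify, 5)     # only the non-negative path runs

executed = {lineno for (_, lineno), hits in tracer.results().counts.items() if hits}
print(sorted(executed))         # line numbers hit; the "negative" line is absent
```

Lines that never appear in the counts correspond exactly to the untested code a coverage report would flag.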

Upon completion of these next sections, you will be able to:

  • Install and use “Coverage.py” to measure code coverage of a unit test.
  • Generate a code coverage report for your unit tests.

We will be using the Python project located in the “MyDBProject” folder.

  1. In the currently open terminal session, navigate to the “/home/wasadmin/Works/lab10/MyDBProject” using the following command:

    cd /home/wasadmin/Works/lab10/MyDBProject
  2. Launch VSCode and open the “app.py” file using the command-line tool as follows:

    code app.py

    Have a read through the “app.py” code. Below is a brief outline of the key elements.

    Explanation of “app.py”

    The “app.py” is a Flask application that serves as a simple example of a basic customer management system. The application uses SQLAlchemy, an ORM (Object-Relational Mapper) for Python, for interacting with a SQLite database.

    Note

    We are not concerned with the code per se; it exists so that we can run a unit test and measure that test's code coverage.

    For interest's sake, the app's key features are briefly outlined below:

    • Database setup:
      • The application is configured to use a SQLite database “database.db”. The SQLAlchemy ORM is set up with the Flask application to interact with this database.
    • Customer Model:
      • A “Customer” model is defined with SQLAlchemy, representing a customer in the system. Each customer has an “id” and a “name”.
    • DatabaseWrapper Class:
      • A helper class ”DatabaseWrapper” is created to handle common database operations such as “add_and_commit” (for adding a new record and committing the change), “get_all” (for retrieving all records of a specific model), “delete” (for deleting a specific record), and “delete_all” (for deleting all records of a specific model).
    • Routes:
      • Two routes are defined - the root route (“/”) and a “/customers” route.
      • The root route retrieves all customers from the database and renders them on the “index.html” page.
      • The “/customers” route retrieves all customers and displays them on the “customers.html” page.
    • Database setup before each request:
      • Before handling each request, the application checks if the database (specifically the “Customer” table) has been set up.
      • If it has not, the application creates the necessary tables and inserts some sample customers into the “Customer” table.
    • Running the application:
      • The application runs with debugging enabled when executed directly.

        Note

        This application provides a simple and minimal example of using Flask with SQLAlchemy for managing data in a SQLite database, and could serve as a starting point for developing more complex applications.
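    As a rough, dependency-free sketch of the wrapper idea described above (this uses the standard-library “sqlite3” module rather than SQLAlchemy, and the table and column names are assumed):

```python
import sqlite3

class DatabaseWrapper:
    """Stdlib sqlite3 stand-in for the lab's SQLAlchemy helper class.

    Mirrors the four operations described above: add_and_commit,
    get_all, delete, and delete_all.
    """

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS customer (id INTEGER PRIMARY KEY, name TEXT)")

    def add_and_commit(self, name: str) -> int:
        cur = self.conn.execute("INSERT INTO customer (name) VALUES (?)", (name,))
        self.conn.commit()
        return cur.lastrowid

    def get_all(self):
        return self.conn.execute("SELECT id, name FROM customer").fetchall()

    def delete(self, customer_id: int) -> None:
        self.conn.execute("DELETE FROM customer WHERE id = ?", (customer_id,))
        self.conn.commit()

    def delete_all(self) -> None:
        self.conn.execute("DELETE FROM customer")
        self.conn.commit()

db = DatabaseWrapper(sqlite3.connect(":memory:"))
db.add_and_commit("Alice")
db.add_and_commit("Bob")
print(db.get_all())  # → [(1, 'Alice'), (2, 'Bob')]
```

    The real app wraps a SQLAlchemy session instead of a raw connection, but the division of responsibilities is the same.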

  3. Close the “app.py” file using File > Exit in the VSCode menu.

    Let's have a quick look at the unit test called “database_wrapper_test.py”, which we will evaluate using “Coverage.py” to see how much code the test covers.

  4. Launch VSCode and open the “database_wrapper_test.py” using the command-line tool as follows:

    code database_wrapper_test.py

    The contents of the “database_wrapper_test.py” unit-test can be seen in the image that follows:

    [Image: Contents of the “database_wrapper_test.py” unit test]

    As we can see in the image above, this unit-test case tests the four methods of the “DatabaseWrapper” class:

    • test_add_and_commit:
      • Verifies that a model instance is added and committed to the session.
    • test_get_all:
      • Verifies that all instances of a model are retrieved.
    • test_delete:
      • Verifies that a model instance is deleted and the change is committed to the session.
    • test_delete_all:
      • Verifies that all instances of a model are deleted.

        We will now install some needed dependencies that will allow us to leverage “Coverage.py”.

  5. Once you have finished reading the code, close the “database_wrapper_test.py” using the File > Exit menu of VSCode.
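    For reference, the first of the four checks above could be sketched with the standard library alone, substituting a “unittest.mock” fake for the SQLAlchemy session (the wrapper stand-in below is illustrative, not the lab's actual code):

```python
import unittest
from unittest.mock import MagicMock

class DatabaseWrapper:
    """Minimal stand-in so the sketch is self-contained (the real class lives in app.py)."""
    def __init__(self, session):
        self.session = session

    def add_and_commit(self, instance):
        self.session.add(instance)
        self.session.commit()

class DatabaseWrapperTest(unittest.TestCase):
    def test_add_and_commit(self):
        session = MagicMock()            # the fake session records calls made to it
        wrapper = DatabaseWrapper(session)
        model = object()
        wrapper.add_and_commit(model)
        session.add.assert_called_once_with(model)
        session.commit.assert_called_once()

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DatabaseWrapperTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```

    Using a mock session keeps the test fast and independent of any real database, which is one reason such tests achieve high coverage cheaply.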

10.6. Install Coverage.py & Generate Report

  1. Using the same Terminal session, install “Coverage.py”, “Flask”, and “SQLAlchemy” by executing the following commands:

    pip install coverage==7.2.7
    pip install Flask==2.3.2
    pip install Flask-SQLAlchemy==3.0.5
  2. To measure the code coverage of your unit tests using “Coverage.py”, use the following command:

    coverage run -m unittest discover

    As a result, a binary file called “.coverage” will be created.

  3. Use the following command to verify that “.coverage” file was generated:

    ls .coverage

    The result is:

    .coverage

    The binary “.coverage” file is generated by the “Coverage.py” tool after you run your tests under coverage. It’s a data file that contains the detailed coverage information for each file in your project that was exercised during the test run.

    Coverage data includes:

    • Which lines of code were executed
    • How many times each line was executed
    • Other details necessary to generate coverage reports

      The file is in a binary format that is readable by “Coverage.py”, and it’s not intended to be human-readable. You typically don’t need to interact with the .coverage file directly. Instead, you use “Coverage.py” commands to generate reports from this data, which provides insight into how thoroughly your tests are exercising your code.

      The following are some of the common commands you can use with “Coverage.py” to generate reports:

    • coverage report
      • This generates a basic report in the console that shows the coverage percentage for each file.
    • coverage html
      • This generates an HTML report in a directory named htmlcov. The report provides a detailed and interactive view of which lines were covered in each file.
    • coverage xml
      • This generates an XML report that can be used by other tools for further analysis or for integration with continuous integration systems.

        Note

        The “.coverage” file is usually not included in source control, and is often listed in the .gitignore file for Python projects.

        To simplify running the correct commands to generate a text-based coverage report from the data within the “.coverage” file, we have created a bash script called “run_coverage.sh”.

  4. Use the “cat” command to list the contents of “run_coverage.sh”:

    cat run_coverage.sh

    The resulting contents of “run_coverage.sh” are displayed below:

    #!/bin/bash
    coverage run --source=. -m unittest discover -p '*_test.py' && coverage report --omit=app.py

    Explanation:

    This command runs the unit tests under coverage and then generates a coverage report, using the “Coverage.py” tool to measure the code coverage of Python programs.

    Here’s a breakdown of what each part of the command does:

    coverage run --source=.

    This part of the command runs your Python program under coverage. The “--source=.” option tells “Coverage.py” to only measure coverage for Python files in the current directory (“.”) and its subdirectories.

    -m unittest discover -p '*_test.py'

    This part is a set of options for the “coverage run” command. The “-m unittest discover” command tells Python to run the unittest module’s test discovery system, which will discover and run test files. The “-p '*_test.py'” option tells the test discovery system to only discover files that match the pattern “*_test.py” (i.e., any Python file whose name ends with “_test.py”).

    &&

    This is a shell operator that tells the shell to execute the second command only if the first command completed successfully. If the first command fails (i.e., if any of the unit tests fail), then the second command will not be run.

    coverage report --omit=app.py

    This part of the command generates a coverage report. The “--omit=app.py” option tells “coverage.py” to omit the “app.py” file when generating the report. This means that the report will not include coverage data for “app.py”.

    So, in summary, this command will discover and run all unit tests in Python files that end with “_test.py”, measure the code coverage of these tests for all Python files in the current directory and its subdirectories, and then generate a coverage report that omits “app.py”. If any of the unit tests fail, then the coverage report will not be generated.

  5. To generate a code coverage report using “Coverage.py”, issue these two commands to make the script executable, and then run the script:

    chmod +x run_coverage.sh
    ./run_coverage.sh

    The result will be similar to the following output:

    ----------------------------------------------------------------------
    ----------------------------------------------------------------------
    Ran 4 tests in 0.139s
    
    OK
    Name                       Stmts   Miss  Cover
    ----------------------------------------------
    database_wrapper_test.py      43      1    98%
    ----------------------------------------------
    TOTAL                         43      1    98%

    The report provides information about the number of statements (“Stmts”), the number of missed statements (“Miss”), and the coverage percentage (“Cover”) for each file.

    Important

    The goal is to create unit-tests which aim for high coverage percentages to ensure comprehensive testing of your code.

    It is also possible to generate html-based reports.

    Similarly, to simplify running the correct commands to generate an HTML coverage report from the data within the “.coverage” file, we have created a bash script called “gen_html_report.sh”.

  6. Use the “cat” command to list the contents of “gen_html_report.sh”:

    cat gen_html_report.sh

    The resulting contents of “gen_html_report.sh” are displayed as follows:

    #!/bin/bash
    coverage run --source=. -m unittest discover -p '*_test.py' && coverage html --omit=app.py
  7. To generate a code coverage HTML report using “Coverage.py”, issue these two commands to make the script executable, and then run the script:

    chmod +x gen_html_report.sh
    ./gen_html_report.sh

    The resulting output is as follows:

    ----------------------------------------------------------------------
    Ran 4 tests in 0.140s
    
    OK
    Wrote HTML report to htmlcov/index.html
  8. Double-click on the Ubuntu Desktop Home icon, or use the Ubuntu Application menu to launch the File-Explorer app (Files) as shown in the two options below:

    [Image: Ubuntu Desktop Home icon]
    [Image: Files app in the Ubuntu Applications menu]
  9. Navigate through the file-system to “Home/Works/lab10/MyDBProject/htmlcov” to locate the report (index.html), then right-click on it and select “Open with Google Chrome”, as seen below:

    [Image: “htmlcov” folder open in the Files app with the “Open with Google Chrome” context menu]
  10. Click on the “database_wrapper_test.py” link inside “index.html” to open the details page as seen below:

    [Image: Coverage “index.html” page with the “database_wrapper_test.py” link]
  11. The resulting page displays a detailed coverage report for the “database_wrapper_test.py” unit test, which contains coverage information very similar to the image below:

    [Image: Detailed coverage report for “database_wrapper_test.py”]
  12. Scroll down, and take a look at the report details.

    We can see the Unit Test is reported as covering 98% of the code.

    By following and completing this section of the lab, you learned how to use “Coverage.py” to measure code coverage for your unit tests.

    Code coverage analysis helps you identify areas of your code that require more testing and ensures better overall quality and reliability of your software.

    Well done! The lab is now complete.

10.7. Cleanup

  1. Deactivate the “pyenv” virtual environment in the Terminal where you have been doing the work above using the following command (run it twice):

    pyenv deactivate lab10

    Result:

    pyenv-virtualenv: no virtualenv has been activated.
  2. Close all open Terminal sessions, and close all open application windows (e.g. VSCode, Chrome, etc).

10.8. Summary

This lab focused on vulnerability scanning and code coverage.

It covered two tools: Bandit for vulnerability scanning and “Coverage.py” for measuring code coverage.

In the first part, participants learned about Bandit, a static analysis tool for identifying security issues in Python applications. Bandit was installed and used to scan for vulnerabilities, and the results were interpreted to address potential risks.

The second part focused on “Coverage.py”, a tool for measuring code coverage. Participants installed “Coverage.py”, ran unit tests with coverage, and generated coverage reports. These reports helped identify areas of the code that required more testing.

Overall, the lab aimed to equip participants with practical skills to manage Python projects effectively, enhance security, and ensure comprehensive code coverage for better software quality.

Monolith vs Microservices Design

Module 11

In this lab, you will explore two different application architectures for a car rental system:

  • A monolithic architecture
  • A microservices-based architecture.

You will analyze their designs, identify key differences, and answer questions to deepen your understanding of the concepts.

Some questions may have multiple valid answers, thus providing an opportunity to discuss and share insights with your peers.

11.1. Car Rental Scenario

Imagine you’re part of a fictitious car rental company with an online platform that allows users to browse and rent cars.

Two potential architectures have been proposed:

The first is a traditional monolithic application design, with a high-level architecture as shown below in Figure 1.

  1. Examine the design of a monolithic car rental application where all functionalities are integrated within a single codebase and deployed as a single unit.

    The second is an application design leveraging microservices, with a high-level architecture that could look like Figure 2, below.

  2. Investigate a microservices-based car rental application where functionalities are modularized into independent services, communicating with each other through APIs.

    [Image: Monolithic architecture diagram]

    Fig 1. Application Architecture 1: Traditional Enterprise Monolithic Architecture Example.

    [Image: Microservices-based architecture diagram]

    Fig. 2 Application Architecture 2: Microservices-based Architecture Example.

11.2. Breaking up the Monolith

  1. Compare the Monolith Application Architecture 1 (Fig.1) with the Microservices Application Architecture 2 (Fig.2), and identify some of the differences.

  2. What kind of Quality of Service properties do we gain or lose by going from Architecture 1 to Architecture 2?

  3. Which type of architecture more easily supports scalability?

  4. What would be involved if you need to change a single file/component in the app code for Architecture 1 vs Architecture 2?

  5. In which of the architectures do you need to have more powerful computers? Is it about CPU, RAM, or both?

  6. You need to recycle the machines hosting the applications. Which architecture will have a shorter application start up time?

11.3. The 12-Factor App

Go through the twelve-factor app principles [https://12factor.net/] and try to see how, if at all, any of the methodologies have been applied to each App Architecture (Fig. 1 and Fig. 2).

For example, you may notice that the Application in Fig. 1 may be hard to scale, e.g., breaking principle VIII, because all code is bundled together (in one bundle/component).

Note

Not all factors can be identified here in this exercise.

11.4. Merits and Drawbacks of Microservices

We’ve curated a series of thought-provoking questions on microservices to foster a deeper understanding of the topic. As you assess each question, consider both your prior knowledge and personal experiences.

Engage in discussions and freely share your insights. There’s no right or wrong response; the aim is to contemplate various perspectives, evaluate potential scenarios, and enrich our collective understanding of microservices.

  1. Consider the following statement:

    "Since my application operates in its unique runtime environment, frequently within a container or a type of virtual machine, I can obtain robust process isolation."

    Does this statement correspond to the concept of microservices? If so, is it advocating for or against its usage?

  2. Reflect on this assertion:

    "Different services' change cycles can be effectively decoupled, allowing Service A to be re-deployed without influencing Service B."

    Does this pertain to the microservices architectural pattern? If so, is this an argument in favor or against its implementation?

  3. Consider the following statement:

    "I am responsible for managing the performance overhead associated with inter-process communication,"

    Is it connected with microservices architecture? If so, does it indicate a reason for or against adopting microservices?

  4. Evaluate this assertion:

    "Deployment cycles and developer productivity are enhanced due to the smaller scope of each service."

    Is this relevant to the concept of microservices? If so, does it constitute a supporting argument for or against its adoption?

  5. Reflect on this statement:

    "This type of application architecture enables scalability on a per service or per tier basis."

    Is it related to the microservices architecture? If so, does it favor or oppose the adoption of microservices?

  6. Consider this statement:

    "My application exhibits outstanding performance and quick service interactions while maintaining a compact security perimeter."

    Is this statement linked to the microservices architecture? If so, is it advocating for or against the use of microservices?

  7. Reflect on the following assertion:

    "The significant operational overhead, including meticulously coordinated deployment and distributed monitoring, results from the increased number of individual components."

    Is this statement relevant to microservices architecture? If yes, does it argue for or against the use of microservices?

  8. Evaluate the following statement:

    "My application consolidates all necessary services in a single location, deploying them via a unified archive. The application adheres to standard SOA principles, with services corresponding to specific business capabilities and accessible through a well-defined WSDL-based web service interface."

    Is this related to the microservices architectural pattern? If so, is it an argument in favor or against its implementation?

  9. Consider this statement:

    "Now, it’s more like a nanoservice, offering increased granularity and modularity. Isn’t that impressive?"

    Does this pertain to microservices architecture? If so, does this present an argument in favor or against its adoption?

  10. Reflect on this statement:

    "Smaller development teams can be allocated to create and manage individual services, leading to quicker onboarding and development cycles."

    Does this statement correspond to the microservices architecture? If so, is it advocating for or against its usage?

  11. Evaluate the following statement:

    "With this architectural design, any state must be stored in a dependable external service, like a caching service."

    Is this relevant to microservices? If so, does it constitute a supporting argument for or against its adoption?

11.5. Summary

In this lab, we compared two types of application architecture: the monolith and one based on microservices.

Question 4 - Sample Answers

Module 12

As my app is deployed and executed in its own run-time (often in a container or a VM of sorts), I can get stronger process isolation properties.

Argument in Favor

Microservices architecture allows for robust process isolation as each service operates in its own runtime environment, promoting better security, fault isolation, and resilience.

Change cycles for different services can be more easily decoupled. For example, I can re-deploy Service A without affecting Service B.

Argument in Favor

Microservices enable independent deployment of services, facilitating more flexible and rapid change cycles. One service can be modified without impacting others, enhancing agility and minimizing disruption.

I have to deal with performance overhead related to inter-process communication.

Argument Against

Microservices introduce overhead from inter-process communication, which can negatively impact performance. Concerns such as network latency and message serialization need to be effectively managed to mitigate this overhead.

Deployment cycles and developer velocity are faster (due to the smaller footprint of the service).

Argument in Favor

Microservices typically lead to quicker deployment cycles and increased developer velocity. The smaller scope of each service, reduced codebase, and minimized dependencies allow for speedier development, testing, and deployment.

With this type of app, I can do scaling per service / per tier.

Argument in Favor

Microservices permit granular scaling. Each service can be scaled independently based on its specific resource demands, promoting better resource utilization and improved scalability.

My app boasts a fantastic performance and fast service interactions; it also has a nice and small security perimeter.

Argument in Favor

Microservices can contribute to better performance and quicker service interactions due to smaller codebases and reduced dependencies. Additionally, the smaller security perimeter of each service simplifies security management.

Now I have to deal with a considerable operational overhead (the accidental complexity) with more distinct moving parts that require precisely orchestrated deployment and distributed monitoring.

Argument Against

Microservices introduce increased operational overhead due to the complexity of managing multiple services. Careful orchestration, automation of deployment, and distributed monitoring are necessary to ensure smooth functioning of the system.

My application has all the required services housed in one place and they get deployed using a single archive. The app is designed using the standard SOA principles: its services are mapped to distinct business capabilities, the services are narrow, composable, and accessible through a well-defined WSDL-based web service interface.

Argument Against

This statement appears to describe a traditional Service-Oriented Architecture (SOA) rather than microservices. In SOA, services are often co-located and deployed as a single archive, whereas microservices promote individual deployment and independent services.

Now it is more of a nanoservice, really. Which is really cool, right?

Argument in Favor

The statement suggests a 'nanoservice' approach, which implies even greater granularity and modularity than typical microservices. This can be advantageous in certain scenarios but also may introduce complexities such as increased communication overhead and management complexity.

Smaller development teams can be set up to develop and maintain the services, which leads to faster getting-up-to-speed and development cycles.

Argument in Favor

Microservices allow for smaller, focused development teams per service, which can lead to faster onboarding, greater team autonomy, and quicker development cycles.

Now, with this type of design, you must persist any state in a reliable external (e.g., caching) service.

Argument in Favor

In a microservices architecture, state should be managed externally, for instance, in a database or caching service. This can improve service statelessness, scalability, and resilience, and supports the principles of microservices.